bibtex_url (string) | proceedings (string) | bibtext (string) | abstract (string) | authors (sequence) | title (string) | id (string) | arxiv_id (string) | GitHub (sequence) | paper_page (string) | n_linked_authors (int64) | upvotes (int64) | num_comments (int64) | n_authors (int64) | Models (sequence) | Datasets (sequence) | Spaces (sequence) | paper_page_exists_pre_conf (int64) | type (string)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.findings-emnlp.120.bib | https://aclanthology.org/2023.findings-emnlp.120/ | @inproceedings{qiu-etal-2023-brain,
title = "Can Brain Signals Reveal Inner Alignment with Human Languages?",
author = "Qiu, Jielin and
Han, William and
Zhu, Jiacheng and
Xu, Mengdi and
Weber, Douglas and
Li, Bo and
Zhao, Ding",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.120",
doi = "10.18653/v1/2023.findings-emnlp.120",
pages = "1789--1804",
abstract = "Brain Signals, such as Electroencephalography (EEG), and human languages have been widely explored independently for many downstream tasks, however, the connection between them has not been well explored. In this study, we explore the relationship and dependency between EEG and language. To study at the representation level, we introduced \textbf{MTAM}, a \textbf{M}ultimodal \textbf{T}ransformer \textbf{A}lignment \textbf{M}odel, to observe coordinated representations between the two modalities. We used various relationship alignment-seeking techniques, such as Canonical Correlation Analysis and Wasserstein Distance, as loss functions to transfigure features. On downstream applications, sentiment analysis and relation detection, we achieved new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieved an F1-score improvement of 1.7{\%} on K-EmoCon and 9.3{\%} on Zuco datasets for sentiment analysis, and 7.4{\%} on ZuCo for relation detection. In addition, we provide interpretations of the performance improvement: (1) feature distribution shows the effectiveness of the alignment module for discovering and encoding the relationship between EEG and language; (2) alignment weights show the influence of different language semantics as well as EEG frequency features; (3) brain topographical maps provide an intuitive demonstration of the connectivity in the brain regions. Our code is available at \url{https://github.com/Jason-Qiu/EEG_Language_Alignment}.",
}
| Brain Signals, such as Electroencephalography (EEG), and human languages have been widely explored independently for many downstream tasks, however, the connection between them has not been well explored. In this study, we explore the relationship and dependency between EEG and language. To study at the representation level, we introduced \textbf{MTAM}, a \textbf{M}ultimodal \textbf{T}ransformer \textbf{A}lignment \textbf{M}odel, to observe coordinated representations between the two modalities. We used various relationship alignment-seeking techniques, such as Canonical Correlation Analysis and Wasserstein Distance, as loss functions to transfigure features. On downstream applications, sentiment analysis and relation detection, we achieved new state-of-the-art results on two datasets, ZuCo and K-EmoCon. Our method achieved an F1-score improvement of 1.7{\%} on K-EmoCon and 9.3{\%} on Zuco datasets for sentiment analysis, and 7.4{\%} on ZuCo for relation detection. In addition, we provide interpretations of the performance improvement: (1) feature distribution shows the effectiveness of the alignment module for discovering and encoding the relationship between EEG and language; (2) alignment weights show the influence of different language semantics as well as EEG frequency features; (3) brain topographical maps provide an intuitive demonstration of the connectivity in the brain regions. Our code is available at \url{https://github.com/Jason-Qiu/EEG_Language_Alignment}. | [
"Qiu, Jielin",
"Han, William",
"Zhu, Jiacheng",
"Xu, Mengdi",
"Weber, Douglas",
"Li, Bo",
"Zhao, Ding"
] | Can Brain Signals Reveal Inner Alignment with Human Languages? | findings-emnlp.120 | 2208.06348 | [
"https://github.com/jason-qiu/eeg_language_alignment"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.121.bib | https://aclanthology.org/2023.findings-emnlp.121/ | @inproceedings{zhao-etal-2023-demosg,
title = "{D}emo{SG}: Demonstration-enhanced Schema-guided Generation for Low-resource Event Extraction",
author = "Zhao, Gang and
Gong, Xiaocheng and
Yang, Xinjie and
Dong, Guanting and
Lu, Shudong and
Li, Si",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.121",
doi = "10.18653/v1/2023.findings-emnlp.121",
pages = "1805--1816",
abstract = "Most current Event Extraction (EE) methods focus on the high-resource scenario, which requires a large amount of annotated data and can hardly be applied to low-resource domains. To address EE more effectively with limited resources, we propose the Demonstration-enhanced Schema-guided Generation (DemoSG) model, which benefits low-resource EE from two aspects: Firstly, we propose the demonstration-based learning paradigm for EE to fully use the annotated data, which transforms them into demonstrations to illustrate the extraction process and help the model learn effectively. Secondly, we formulate EE as a natural language generation task guided by schema-based prompts, thereby leveraging label semantics and promoting knowledge transfer in low-resource scenarios. We conduct extensive experiments under in-domain and domain adaptation low-resource settings on three datasets, and study the robustness of DemoSG. The results show that DemoSG significantly outperforms current methods in low-resource scenarios.",
}
| Most current Event Extraction (EE) methods focus on the high-resource scenario, which requires a large amount of annotated data and can hardly be applied to low-resource domains. To address EE more effectively with limited resources, we propose the Demonstration-enhanced Schema-guided Generation (DemoSG) model, which benefits low-resource EE from two aspects: Firstly, we propose the demonstration-based learning paradigm for EE to fully use the annotated data, which transforms them into demonstrations to illustrate the extraction process and help the model learn effectively. Secondly, we formulate EE as a natural language generation task guided by schema-based prompts, thereby leveraging label semantics and promoting knowledge transfer in low-resource scenarios. We conduct extensive experiments under in-domain and domain adaptation low-resource settings on three datasets, and study the robustness of DemoSG. The results show that DemoSG significantly outperforms current methods in low-resource scenarios. | [
"Zhao, Gang",
"Gong, Xiaocheng",
"Yang, Xinjie",
"Dong, Guanting",
"Lu, Shudong",
"Li, Si"
] | DemoSG: Demonstration-enhanced Schema-guided Generation for Low-resource Event Extraction | findings-emnlp.121 | 2310.10481 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.122.bib | https://aclanthology.org/2023.findings-emnlp.122/ | @inproceedings{li-etal-2023-glgr,
title = "{GLGR}: Question-aware Global-to-Local Graph Reasoning for Multi-party Dialogue Reading Comprehension",
author = "Li, Yanling and
Zou, Bowei and
Fan, Yifan and
Li, Xibo and
Aw, Ai Ti and
Hong, Yu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.122",
doi = "10.18653/v1/2023.findings-emnlp.122",
pages = "1817--1826",
abstract = "Graph reasoning contributes to the integration of discretely-distributed attentive information (clues) for Multi-party Dialogue Reading Comprehension (MDRC). This is attributed primarily to multi-hop reasoning over global conversational structures. However, existing approaches barely apply questions for anti-noise graph reasoning. More seriously, the local semantic structures in utterances are neglected, although they are beneficial for bridging across semantically-related clues. In this paper, we propose a question-aware global-to-local graph reasoning approach. It expands the canonical Interlocutor-Utterance graph by introducing a question node, enabling comprehensive global graph reasoning. More importantly, it constructs a semantic-role graph for each utterance, and accordingly performs local graph reasoning conditioned on the semantic relations. We design a two-stage encoder network to implement the progressive reasoning from the global graph to local. The experiments on the benchmark datasets Molweni and FriendsQA show that our approach yields significant improvements, compared to BERT and ELECTRA baselines. It achieves 73.6{\%} and 77.2{\%} F1-scores on Molweni and FriendsQA, respectively, outperforming state-of-the-art methods that employ different pretrained language models as backbones.",
}
| Graph reasoning contributes to the integration of discretely-distributed attentive information (clues) for Multi-party Dialogue Reading Comprehension (MDRC). This is attributed primarily to multi-hop reasoning over global conversational structures. However, existing approaches barely apply questions for anti-noise graph reasoning. More seriously, the local semantic structures in utterances are neglected, although they are beneficial for bridging across semantically-related clues. In this paper, we propose a question-aware global-to-local graph reasoning approach. It expands the canonical Interlocutor-Utterance graph by introducing a question node, enabling comprehensive global graph reasoning. More importantly, it constructs a semantic-role graph for each utterance, and accordingly performs local graph reasoning conditioned on the semantic relations. We design a two-stage encoder network to implement the progressive reasoning from the global graph to local. The experiments on the benchmark datasets Molweni and FriendsQA show that our approach yields significant improvements, compared to BERT and ELECTRA baselines. It achieves 73.6{\%} and 77.2{\%} F1-scores on Molweni and FriendsQA, respectively, outperforming state-of-the-art methods that employ different pretrained language models as backbones. | [
"Li, Yanling",
"Zou, Bowei",
"Fan, Yifan",
"Li, Xibo",
"Aw, Ai Ti",
"Hong, Yu"
] | GLGR: Question-aware Global-to-Local Graph Reasoning for Multi-party Dialogue Reading Comprehension | findings-emnlp.122 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.123.bib | https://aclanthology.org/2023.findings-emnlp.123/ | @inproceedings{ji-etal-2023-towards,
title = "Towards Mitigating {LLM} Hallucination via Self Reflection",
author = "Ji, Ziwei and
Yu, Tiezheng and
Xu, Yan and
Lee, Nayeon and
Ishii, Etsuko and
Fung, Pascale",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.123",
doi = "10.18653/v1/2023.findings-emnlp.123",
pages = "1827--1843",
abstract = "Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks. However, the practical deployment still faces challenges, notably the issue of {``}hallucination{''}, where models generate plausible-sounding but unfaithful or nonsensical information. This issue becomes particularly critical in the medical domain due to the uncommon professional concepts and potential social risks involved. This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets. Our investigation centers on the identification and comprehension of common problematic answers, with a specific emphasis on hallucination. To tackle this challenge, we present an interactive self-reflection methodology that incorporates knowledge acquisition and answer generation. Through this feedback process, our approach steadily enhances the factuality, consistency, and entailment of the generated answers. Consequently, we harness the interactivity and multitasking ability of LLMs and produce progressively more precise and accurate answers. Experimental results on both automatic and human evaluation demonstrate the superiority of our approach in hallucination reduction compared to baselines.",
}
| Large language models (LLMs) have shown promise for generative and knowledge-intensive tasks including question-answering (QA) tasks. However, the practical deployment still faces challenges, notably the issue of {``}hallucination{''}, where models generate plausible-sounding but unfaithful or nonsensical information. This issue becomes particularly critical in the medical domain due to the uncommon professional concepts and potential social risks involved. This paper analyses the phenomenon of hallucination in medical generative QA systems using widely adopted LLMs and datasets. Our investigation centers on the identification and comprehension of common problematic answers, with a specific emphasis on hallucination. To tackle this challenge, we present an interactive self-reflection methodology that incorporates knowledge acquisition and answer generation. Through this feedback process, our approach steadily enhances the factuality, consistency, and entailment of the generated answers. Consequently, we harness the interactivity and multitasking ability of LLMs and produce progressively more precise and accurate answers. Experimental results on both automatic and human evaluation demonstrate the superiority of our approach in hallucination reduction compared to baselines. | [
"Ji, Ziwei",
"Yu, Tiezheng",
"Xu, Yan",
"Lee, Nayeon",
"Ishii, Etsuko",
"Fung, Pascale"
] | Towards Mitigating LLM Hallucination via Self Reflection | findings-emnlp.123 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.124.bib | https://aclanthology.org/2023.findings-emnlp.124/ | @inproceedings{skobov-bono-2023-making,
title = "Making Body Movement in Sign Language Corpus Accessible for Linguists and Machines with Three-Dimensional Normalization of {M}edia{P}ipe",
author = "Skobov, Victor and
Bono, Mayumi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.124",
doi = "10.18653/v1/2023.findings-emnlp.124",
pages = "1844--1855",
abstract = "Linguists can access movement in the sign language video corpus through manual annotation or computational methods. The first relies on a predefinition of features, and the second requires technical knowledge. Methods like MediaPipe and OpenPose are now more often used in sign language processing. MediaPipe detects a two-dimensional (2D) body pose in a single image with a limited approximation of the depth coordinate. Such 2D projection of a three-dimensional (3D) body pose limits the potential application of the resulting models outside the capturing camera settings and position. 2D pose data does not provide linguists with direct and human-readable access to the collected movement data. We propose our four main contributions: A novel 3D normalization method for MediaPipe{'}s 2D pose, a novel human-readable way of representing the 3D normalized pose data, an analysis of Japanese Sign Language (JSL) sociolinguistic features using the proposed techniques, where we show how an individual signer can be identified based on unique personal movement patterns suggesting a potential threat to anonymity. Our method outperforms the common 2D normalization on a small, diverse JSL dataset. We demonstrate its benefit for deep learning approaches by significantly outperforming the pose-based state-of-the-art models on the open sign language recognition benchmark.",
}
| Linguists can access movement in the sign language video corpus through manual annotation or computational methods. The first relies on a predefinition of features, and the second requires technical knowledge. Methods like MediaPipe and OpenPose are now more often used in sign language processing. MediaPipe detects a two-dimensional (2D) body pose in a single image with a limited approximation of the depth coordinate. Such 2D projection of a three-dimensional (3D) body pose limits the potential application of the resulting models outside the capturing camera settings and position. 2D pose data does not provide linguists with direct and human-readable access to the collected movement data. We propose our four main contributions: A novel 3D normalization method for MediaPipe{'}s 2D pose, a novel human-readable way of representing the 3D normalized pose data, an analysis of Japanese Sign Language (JSL) sociolinguistic features using the proposed techniques, where we show how an individual signer can be identified based on unique personal movement patterns suggesting a potential threat to anonymity. Our method outperforms the common 2D normalization on a small, diverse JSL dataset. We demonstrate its benefit for deep learning approaches by significantly outperforming the pose-based state-of-the-art models on the open sign language recognition benchmark. | [
"Skobov, Victor",
"Bono, Mayumi"
] | Making Body Movement in Sign Language Corpus Accessible for Linguists and Machines with Three-Dimensional Normalization of MediaPipe | findings-emnlp.124 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.125.bib | https://aclanthology.org/2023.findings-emnlp.125/ | @inproceedings{ruder-etal-2023-xtreme,
title = "{XTREME}-{UP}: A User-Centric Scarce-Data Benchmark for Under-Represented Languages",
author = "Ruder, Sebastian and
Clark, Jonathan and
Gutkin, Alexander and
Kale, Mihir and
Ma, Min and
Nicosia, Massimo and
Rijhwani, Shruti and
Riley, Parker and
Sarr, Jean-Michel and
Wang, Xinyi and
Wieting, John and
Gupta, Nitish and
Katanova, Anna and
Kirov, Christo and
Dickinson, Dana and
Roark, Brian and
Samanta, Bidisha and
Tao, Connie and
Adelani, David and
Axelrod, Vera and
Caswell, Isaac and
Cherry, Colin and
Garrette, Dan and
Ingle, Reeve and
Johnson, Melvin and
Panteleev, Dmitry and
Talukdar, Partha",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.125",
doi = "10.18653/v1/2023.findings-emnlp.125",
pages = "1856--1884",
abstract = "Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) {---} languages for which NLP research is particularly far behind in meeting user needs {---} it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks {---} tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models.",
}
| Data scarcity is a crucial issue for the development of highly multilingual NLP systems. Yet for many under-represented languages (ULs) {---} languages for which NLP research is particularly far behind in meeting user needs {---} it is feasible to annotate small amounts of data. Motivated by this, we propose XTREME-UP, a benchmark defined by: its focus on the scarce-data scenario rather than zero-shot; its focus on user-centric tasks {---} tasks with broad adoption by speakers of high-resource languages; and its focus on under-represented languages where this scarce-data scenario tends to be most realistic. XTREME-UP evaluates the capabilities of language models across 88 under-represented languages over 9 key user-centric technologies including ASR, OCR, MT, and information access tasks that are of general utility. We create new datasets for OCR, autocomplete, semantic parsing, and transliteration, and build on and refine existing datasets for other tasks. XTREME-UP provides methodology for evaluating many modeling scenarios including text only, multi-modal (vision, audio, and text), supervised parameter tuning, and in-context learning. We evaluate commonly used models on the benchmark. We release all code and scripts to train and evaluate models. | [
"Ruder, Sebastian",
"Clark, Jonathan",
"Gutkin, Alex",
"er",
"Kale, Mihir",
"Ma, Min",
"Nicosia, Massimo",
"Rijhwani, Shruti",
"Riley, Parker",
"Sarr, Jean-Michel",
"Wang, Xinyi",
"Wieting, John",
"Gupta, Nitish",
"Katanova, Anna",
"Kirov, Christo",
"Dickinson, Dana",
"Roark, Brian",
"Samanta, Bidisha",
"Tao, Connie",
"Adelani, David",
"Axelrod, Vera",
"Caswell, Isaac",
"Cherry, Colin",
"Garrette, Dan",
"Ingle, Reeve",
"Johnson, Melvin",
"Panteleev, Dmitry",
"Talukdar, Partha"
] | XTREME-UP: A User-Centric Scarce-Data Benchmark for Under-Represented Languages | findings-emnlp.125 | 2305.11938 | [
"https://github.com/google-research/xtreme-up"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.126.bib | https://aclanthology.org/2023.findings-emnlp.126/ | @inproceedings{wu-etal-2023-diffuvst,
title = "{D}iffu{VST}: Narrating Fictional Scenes with Global-History-Guided Denoising Models",
author = "Wu, Shengguang and
Yuan, Mei and
Su, Qi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.126",
doi = "10.18653/v1/2023.findings-emnlp.126",
pages = "1885--1896",
abstract = "Recent advances in image and video creation, especially AI-based image synthesis, have led to the production of numerous visual scenes that exhibit a high level of abstractness and diversity. Consequently, Visual Storytelling (VST), a task that involves generating meaningful and coherent narratives from a collection of images, has become even more challenging and is increasingly desired beyond real-world imagery. While existing VST techniques, which typically use autoregressive decoders, have made significant progress, they suffer from low inference speed and are not well-suited for synthetic scenes. To this end, we propose a novel diffusion-based system DiffuVST, which models the generation of a series of visual descriptions as a single conditional denoising process. The stochastic and non-autoregressive nature of DiffuVST at inference time allows it to generate highly diverse narratives more efficiently. In addition, DiffuVST features a unique design with bi-directional text history guidance and multimodal adapter modules, which effectively improve inter-sentence coherence and image-to-text fidelity. Extensive experiments on the story generation task covering four fictional visual-story datasets demonstrate the superiority of DiffuVST over traditional autoregressive models in terms of both text quality and inference speed.",
}
| Recent advances in image and video creation, especially AI-based image synthesis, have led to the production of numerous visual scenes that exhibit a high level of abstractness and diversity. Consequently, Visual Storytelling (VST), a task that involves generating meaningful and coherent narratives from a collection of images, has become even more challenging and is increasingly desired beyond real-world imagery. While existing VST techniques, which typically use autoregressive decoders, have made significant progress, they suffer from low inference speed and are not well-suited for synthetic scenes. To this end, we propose a novel diffusion-based system DiffuVST, which models the generation of a series of visual descriptions as a single conditional denoising process. The stochastic and non-autoregressive nature of DiffuVST at inference time allows it to generate highly diverse narratives more efficiently. In addition, DiffuVST features a unique design with bi-directional text history guidance and multimodal adapter modules, which effectively improve inter-sentence coherence and image-to-text fidelity. Extensive experiments on the story generation task covering four fictional visual-story datasets demonstrate the superiority of DiffuVST over traditional autoregressive models in terms of both text quality and inference speed. | [
"Wu, Shengguang",
"Yuan, Mei",
"Su, Qi"
] | DiffuVST: Narrating Fictional Scenes with Global-History-Guided Denoising Models | findings-emnlp.126 | 2312.07066 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.127.bib | https://aclanthology.org/2023.findings-emnlp.127/ | @inproceedings{zakizadeh-etal-2023-difair,
title = "{D}i{F}air: A Benchmark for Disentangled Assessment of Gender Knowledge and Bias",
author = "Zakizadeh, Mahdi and
Miandoab, Kaveh and
Pilehvar, Mohammad",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.127",
doi = "10.18653/v1/2023.findings-emnlp.127",
pages = "1897--1914",
abstract = "Numerous debiasing techniques have been proposed to mitigate the gender bias that is prevalent in pretrained language models. These are often evaluated on datasets that check the extent to which the model is gender-neutral in its predictions. Importantly, this evaluation protocol overlooks the possible adverse impact of bias mitigation on useful gender knowledge. To fill this gap, we propose **DiFair**, a manually curated dataset based on masked language modeling objectives. **DiFair** allows us to introduce a unified metric, *gender invariance score*, that not only quantifies a model{'}s biased behavior, but also checks if useful gender knowledge is preserved. We use **DiFair** as a benchmark for a number of widely-used pretained language models and debiasing techniques. Experimental results corroborate previous findings on the existing gender biases, while also demonstrating that although debiasing techniques ameliorate the issue of gender bias, this improvement usually comes at the price of lowering useful gender knowledge of the model.",
}
| Numerous debiasing techniques have been proposed to mitigate the gender bias that is prevalent in pretrained language models. These are often evaluated on datasets that check the extent to which the model is gender-neutral in its predictions. Importantly, this evaluation protocol overlooks the possible adverse impact of bias mitigation on useful gender knowledge. To fill this gap, we propose **DiFair**, a manually curated dataset based on masked language modeling objectives. **DiFair** allows us to introduce a unified metric, *gender invariance score*, that not only quantifies a model{'}s biased behavior, but also checks if useful gender knowledge is preserved. We use **DiFair** as a benchmark for a number of widely-used pretained language models and debiasing techniques. Experimental results corroborate previous findings on the existing gender biases, while also demonstrating that although debiasing techniques ameliorate the issue of gender bias, this improvement usually comes at the price of lowering useful gender knowledge of the model. | [
"Zakizadeh, Mahdi",
"Mi",
"oab, Kaveh",
"Pilehvar, Mohammad"
] | DiFair: A Benchmark for Disentangled Assessment of Gender Knowledge and Bias | findings-emnlp.127 | 2310.14329 | [
"https://github.com/mzakizadeh/difair_public"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.128.bib | https://aclanthology.org/2023.findings-emnlp.128/ | @inproceedings{oh-schuler-2023-transformer,
title = "Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens",
author = "Oh, Byung-Doh and
Schuler, William",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.128",
doi = "10.18653/v1/2023.findings-emnlp.128",
pages = "1915--1921",
abstract = "Recent psycholinguistic studies have drawn conflicting conclusions about the relationship between the quality of a language model and the ability of its surprisal estimates to predict human reading times, which has been speculated to be due to the large gap in both the amount of training data and model capacity across studies. The current work aims to consolidate these findings by evaluating surprisal estimates from Transformer-based language model variants that vary systematically in the amount of training data and model capacity on their ability to predict human reading times. The results show that surprisal estimates from most variants with contemporary model capacities provide the best fit after seeing about two billion training tokens, after which they begin to diverge from humanlike expectations. Additionally, newly-trained smaller model variants reveal a {`}tipping point{'} at convergence, after which the decrease in language model perplexity begins to result in poorer fits to human reading times. These results suggest that the massive amount of training data is mainly responsible for the poorer fit achieved by surprisal from larger pre-trained language models, and that a certain degree of model capacity is necessary for Transformer-based language models to capture humanlike expectations.",
}
| Recent psycholinguistic studies have drawn conflicting conclusions about the relationship between the quality of a language model and the ability of its surprisal estimates to predict human reading times, which has been speculated to be due to the large gap in both the amount of training data and model capacity across studies. The current work aims to consolidate these findings by evaluating surprisal estimates from Transformer-based language model variants that vary systematically in the amount of training data and model capacity on their ability to predict human reading times. The results show that surprisal estimates from most variants with contemporary model capacities provide the best fit after seeing about two billion training tokens, after which they begin to diverge from humanlike expectations. Additionally, newly-trained smaller model variants reveal a {`}tipping point{'} at convergence, after which the decrease in language model perplexity begins to result in poorer fits to human reading times. These results suggest that the massive amount of training data is mainly responsible for the poorer fit achieved by surprisal from larger pre-trained language models, and that a certain degree of model capacity is necessary for Transformer-based language models to capture humanlike expectations. | [
"Oh, Byung-Doh",
"Schuler, William"
] | Transformer-Based Language Model Surprisal Predicts Human Reading Times Best with About Two Billion Training Tokens | findings-emnlp.128 | 2304.11389 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.129.bib | https://aclanthology.org/2023.findings-emnlp.129/ | @inproceedings{li-etal-2023-explaincpe,
title = "{E}xplain{CPE}: A Free-text Explanation Benchmark of {C}hinese Pharmacist Examination",
author = "Li, Dongfang and
Yu, Jindi and
Hu, Baotian and
Xu, Zhenran and
Zhang, Min",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.129",
doi = "10.18653/v1/2023.findings-emnlp.129",
pages = "1922--1940",
abstract = "In the field of Large Language Models (LLMs), researchers are increasingly exploring their effectiveness across a wide range of tasks. However, a critical area that requires further investigation is the interpretability of these models, particularly the ability to generate rational explanations for their decisions. Most existing explanation datasets are limited to the English language and the general domain, which leads to a scarcity of linguistic diversity and a lack of resources in specialized domains, such as medical. To mitigate this, we propose ExplainCPE, a challenging medical dataset consisting of over 7K problems from Chinese Pharmacist Examination, specifically tailored to assess the model-generated explanations. From the overall results, only GPT-4 passes the pharmacist examination with a 75.7{\%} accuracy, while other models like ChatGPT fail. Further detailed analysis of LLM-generated explanations reveals the limitations of LLMs in understanding medical text and executing computational reasoning. With the increasing importance of AI safety and trustworthiness, ExplainCPE takes a step towards improving and evaluating the interpretability of LLMs in the medical domain.",
}
| In the field of Large Language Models (LLMs), researchers are increasingly exploring their effectiveness across a wide range of tasks. However, a critical area that requires further investigation is the interpretability of these models, particularly the ability to generate rational explanations for their decisions. Most existing explanation datasets are limited to the English language and the general domain, which leads to a scarcity of linguistic diversity and a lack of resources in specialized domains, such as medical. To mitigate this, we propose ExplainCPE, a challenging medical dataset consisting of over 7K problems from Chinese Pharmacist Examination, specifically tailored to assess the model-generated explanations. From the overall results, only GPT-4 passes the pharmacist examination with a 75.7{\%} accuracy, while other models like ChatGPT fail. Further detailed analysis of LLM-generated explanations reveals the limitations of LLMs in understanding medical text and executing computational reasoning. With the increasing importance of AI safety and trustworthiness, ExplainCPE takes a step towards improving and evaluating the interpretability of LLMs in the medical domain. | [
"Li, Dongfang",
"Yu, Jindi",
"Hu, Baotian",
"Xu, Zhenran",
"Zhang, Min"
] | ExplainCPE: A Free-text Explanation Benchmark of Chinese Pharmacist Examination | findings-emnlp.129 | 2305.12945 | [
"https://github.com/hitsz-tmg/explaincpe"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.130.bib | https://aclanthology.org/2023.findings-emnlp.130/ | @inproceedings{sonkar-etal-2023-class,
title = "{CLASS}: A Design Framework for Building Intelligent Tutoring Systems Based on Learning Science principles",
author = "Sonkar, Shashank and
Liu, Naiming and
Mallick, Debshila and
Baraniuk, Richard",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.130",
doi = "10.18653/v1/2023.findings-emnlp.130",
pages = "1941--1961",
abstract = "We present a design framework called Conversational Learning with Analytical Step-by-Step Strategies (CLASS) for building advanced Intelligent Tutoring Systems (ITS) powered by high-performance Large Language Models (LLMs). The CLASS framework empowers ITS with two key capabilities. First, through a carefully curated scaffolding dataset, CLASS equips ITS with essential problem-solving strategies, enabling it to provide tutor-like, step-by-step guidance to students. Second, by using a dynamic conversational dataset, CLASS assists ITS in facilitating natural language interactions, fostering engaging student-tutor conversations. The CLASS framework also provides valuable insights into ITS{'}s internal decision-making process which allows seamless integration of user feedback, thus enabling continuous refinement and improvement. We also present a proof-of-concept ITS, referred to as SPOCK, which is trained using the CLASS framework with a focus on introductory college level biology content. A carefully constructed protocol was developed for SPOCK{'}s preliminary evaluation, examining aspects such as the factual accuracy and relevance of its responses. Experts in the field of biology offered favorable remarks, particularly highlighting SPOCK{'}s capability to break down questions into manageable subproblems and provide encouraging responses to students.",
}
| We present a design framework called Conversational Learning with Analytical Step-by-Step Strategies (CLASS) for building advanced Intelligent Tutoring Systems (ITS) powered by high-performance Large Language Models (LLMs). The CLASS framework empowers ITS with two key capabilities. First, through a carefully curated scaffolding dataset, CLASS equips ITS with essential problem-solving strategies, enabling it to provide tutor-like, step-by-step guidance to students. Second, by using a dynamic conversational dataset, CLASS assists ITS in facilitating natural language interactions, fostering engaging student-tutor conversations. The CLASS framework also provides valuable insights into ITS{'}s internal decision-making process which allows seamless integration of user feedback, thus enabling continuous refinement and improvement. We also present a proof-of-concept ITS, referred to as SPOCK, which is trained using the CLASS framework with a focus on introductory college level biology content. A carefully constructed protocol was developed for SPOCK{'}s preliminary evaluation, examining aspects such as the factual accuracy and relevance of its responses. Experts in the field of biology offered favorable remarks, particularly highlighting SPOCK{'}s capability to break down questions into manageable subproblems and provide encouraging responses to students. | [
"Sonkar, Shashank",
"Liu, Naiming",
"Mallick, Debshila",
"Baraniuk, Richard"
] | CLASS: A Design Framework for Building Intelligent Tutoring Systems Based on Learning Science principles | findings-emnlp.130 | 2305.13272 | [
"https://github.com/luffycodes/tutorbot-spock"
] | https://huggingface.co/papers/2305.13272 | 1 | 0 | 0 | 4 | [
"luffycodes/tutorbot-spock-bio-llama-diff",
"luffycodes/nash-vicuna-13b-v1dot5-ep2-w-rag-w-simple",
"luffycodes/llama-class-shishya-7b-ep3",
"luffycodes/vicuna-class-shishya-all-hal-7b-ep3",
"luffycodes/vicuna-class-shishya-7b-ep3",
"luffycodes/vicuna-class-shishya-ac-hal-7b-ep3",
"luffycodes/vicuna-class-tutor-7b-ep3",
"luffycodes/vicuna-class-shishya-ac-hal-13b-ep3",
"luffycodes/vicuna-class-shishya-all-hal-13b-ep3",
"luffycodes/vicuna-class-shishya-13b-ep3",
"luffycodes/vicuna-class-tutor-13b-ep3",
"luffycodes/vicuna-mmlu-val-mcq-7b-ep2",
"luffycodes/vicuna-mmlu-val-only-correct-mcq-7b-ep2",
"luffycodes/vicuna-mmlu-val-mcq-hal-7b-ep2",
"namirocks/vicuna-tutor-shishya-model-7b-ep3",
"namirocks/vicuna-class-student-all-hal-7b",
"luffycodes/llama-shishya-7b-ep3-v1",
"luffycodes/llama-shishya-7b-ep3-v2"
] | [
"luffycodes/Tutorbot-Spock-Bio-Dataset"
] | [
"open-llm-leaderboard/open_llm_leaderboard",
"Intel/low_bit_open_llm_leaderboard",
"BAAI/open_cn_llm_leaderboard",
"gsaivinay/open_llm_leaderboard",
"GTBench/GTBench",
"felixz/open_llm_leaderboard",
"OPTML-Group/UnlearnCanvas-Benchmark",
"Vikhrmodels/small-shlepa-lb",
"neubla/neubla-llm-evaluation-board",
"rodrigomasini/data_only_open_llm_leaderboard",
"Docfile/open_llm_leaderboard",
"smothiki/open_llm_leaderboard",
"0x1668/open_llm_leaderboard",
"pngwn/open_llm_leaderboard-check",
"asir0z/open_llm_leaderboard",
"kbmlcoding/open_llm_leaderboard_free",
"aichampions/open_llm_leaderboard",
"Adeco/open_llm_leaderboard",
"anirudh937/open_llm_leaderboard",
"smothiki/open_llm_leaderboard2"
] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.131.bib | https://aclanthology.org/2023.findings-emnlp.131/ | @inproceedings{zhao-etal-2023-normal,
title = "Normal-Abnormal Decoupling Memory for Medical Report Generation",
author = "Zhao, Guosheng and
Yan, Yan and
Zhao, Zijian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.131",
doi = "10.18653/v1/2023.findings-emnlp.131",
pages = "1962--1977",
abstract = "The automatic generation of medical reports plays a crucial role in clinical automation. In contrast to natural images, radiological images exhibit a high degree of similarity, while medical data are prone to data bias and complex noise, posing challenges for existing methods in capturing nuanced visual information. To address these challenges, we introduce a novel normal-abnormal semantic decoupling network that utilizes abnormal pattern memory. Different from directly optimizing the network using medical reports, we optimize visual extraction through the extraction of abnormal semantics from the reports. Moreover, we independently learn normal semantics based on abnormal semantics, ensuring that the optimization of the visual network remains unaffected by normal semantics learning. Then, we divided the words in the report into four parts: normal/abnormal sentences and normal/abnormal semantics, optimizing the network with distinct weights for each partition. The two semantic components, along with visual information, are seamlessly integrated to facilitate the generation of precise and coherent reports. This approach mitigates the impact of noisy normal semantics and reports. Moreover, we develop a novel encoder for abnormal pattern memory, which improves the network{'}s ability to detect anomalies by capturing and embedding the abnormal patterns of images in the visual encoder. This approach demonstrates excellent performance on the benchmark MIMIC-CXR, surpassing the current state-of-the-art methods.",
}
| The automatic generation of medical reports plays a crucial role in clinical automation. In contrast to natural images, radiological images exhibit a high degree of similarity, while medical data are prone to data bias and complex noise, posing challenges for existing methods in capturing nuanced visual information. To address these challenges, we introduce a novel normal-abnormal semantic decoupling network that utilizes abnormal pattern memory. Different from directly optimizing the network using medical reports, we optimize visual extraction through the extraction of abnormal semantics from the reports. Moreover, we independently learn normal semantics based on abnormal semantics, ensuring that the optimization of the visual network remains unaffected by normal semantics learning. Then, we divided the words in the report into four parts: normal/abnormal sentences and normal/abnormal semantics, optimizing the network with distinct weights for each partition. The two semantic components, along with visual information, are seamlessly integrated to facilitate the generation of precise and coherent reports. This approach mitigates the impact of noisy normal semantics and reports. Moreover, we develop a novel encoder for abnormal pattern memory, which improves the network{'}s ability to detect anomalies by capturing and embedding the abnormal patterns of images in the visual encoder. This approach demonstrates excellent performance on the benchmark MIMIC-CXR, surpassing the current state-of-the-art methods. | [
"Zhao, Guosheng",
"Yan, Yan",
"Zhao, Zijian"
] | Normal-Abnormal Decoupling Memory for Medical Report Generation | findings-emnlp.131 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.132.bib | https://aclanthology.org/2023.findings-emnlp.132/ | @inproceedings{pfeiffer-etal-2023-mmt5,
title = "mm{T}5: Modular Multilingual Pre-Training Solves Source Language Hallucinations",
author = "Pfeiffer, Jonas and
Piccinno, Francesco and
Nicosia, Massimo and
Wang, Xinyi and
Reid, Machel and
Ruder, Sebastian",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.132",
doi = "10.18653/v1/2023.findings-emnlp.132",
pages = "1978--2008",
abstract = "Multilingual sequence-to-sequence models perform poorly with increased language coverage and fail to consistently generate text in the correct target language in few-shot settings. To address these challenges, we propose mmT5, a modular multilingual sequence-to-sequence model. mmT5 utilizes language-specific modules during pre-training, which disentangle language-specific information from language-agnostic information. We identify representation drift during fine-tuning as a key limitation of modular generative models and develop strategies that enable effective zero-shot transfer. Our model outperforms mT5 at the same parameter sizes by a large margin on representative natural language understanding and generation tasks in 40+ languages. Compared to mT5, mmT5 raises the rate of generating text in the correct language under zero-shot settings from 7{\%} to 99{\%}, thereby greatly alleviating the source language hallucination problem.",
}
| Multilingual sequence-to-sequence models perform poorly with increased language coverage and fail to consistently generate text in the correct target language in few-shot settings. To address these challenges, we propose mmT5, a modular multilingual sequence-to-sequence model. mmT5 utilizes language-specific modules during pre-training, which disentangle language-specific information from language-agnostic information. We identify representation drift during fine-tuning as a key limitation of modular generative models and develop strategies that enable effective zero-shot transfer. Our model outperforms mT5 at the same parameter sizes by a large margin on representative natural language understanding and generation tasks in 40+ languages. Compared to mT5, mmT5 raises the rate of generating text in the correct language under zero-shot settings from 7{\%} to 99{\%}, thereby greatly alleviating the source language hallucination problem. | [
"Pfeiffer, Jonas",
"Piccinno, Francesco",
"Nicosia, Massimo",
"Wang, Xinyi",
"Reid, Machel",
"Ruder, Sebastian"
] | mmT5: Modular Multilingual Pre-Training Solves Source Language Hallucinations | findings-emnlp.132 | 2305.14224 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.133.bib | https://aclanthology.org/2023.findings-emnlp.133/ | @inproceedings{xia-etal-2023-imagenetvc,
title = "{I}mage{N}et{VC}: Zero- and Few-Shot Visual Commonsense Evaluation on 1000 {I}mage{N}et Categories",
author = "Xia, Heming and
Dong, Qingxiu and
Li, Lei and
Xu, Jingjing and
Liu, Tianyu and
Qin, Ziwei and
Sui, Zhifang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.133",
doi = "10.18653/v1/2023.findings-emnlp.133",
pages = "2009--2026",
abstract = "Recently, Large Language Models (LLMs) have been serving as general-purpose interfaces, posing a significant demand for comprehensive visual knowledge. However, it remains unclear how well current LLMs and their visually augmented counterparts (VaLMs) can master visual commonsense knowledge. To investigate this, we propose ImageNetVC, a human-annotated dataset specifically designed for zero- and few-shot visual commonsense evaluation across 1,000 ImageNet categories. Utilizing ImageNetVC, we benchmark the fundamental visual commonsense knowledge of both unimodal LLMs and VaLMs. Furthermore, we analyze the factors affecting the visual commonsense knowledge of large-scale models, providing insights into the development of language models enriched with visual commonsense knowledge. Our code and dataset are available at https://github.com/hemingkx/ImageNetVC.",
}
| Recently, Large Language Models (LLMs) have been serving as general-purpose interfaces, posing a significant demand for comprehensive visual knowledge. However, it remains unclear how well current LLMs and their visually augmented counterparts (VaLMs) can master visual commonsense knowledge. To investigate this, we propose ImageNetVC, a human-annotated dataset specifically designed for zero- and few-shot visual commonsense evaluation across 1,000 ImageNet categories. Utilizing ImageNetVC, we benchmark the fundamental visual commonsense knowledge of both unimodal LLMs and VaLMs. Furthermore, we analyze the factors affecting the visual commonsense knowledge of large-scale models, providing insights into the development of language models enriched with visual commonsense knowledge. Our code and dataset are available at https://github.com/hemingkx/ImageNetVC. | [
"Xia, Heming",
"Dong, Qingxiu",
"Li, Lei",
"Xu, Jingjing",
"Liu, Tianyu",
"Qin, Ziwei",
"Sui, Zhifang"
] | ImageNetVC: Zero- and Few-Shot Visual Commonsense Evaluation on 1000 ImageNet Categories | findings-emnlp.133 | 2305.15028 | [
"https://github.com/hemingkx/imagenetvc"
] | https://huggingface.co/papers/2305.15028 | 1 | 1 | 0 | 6 | [] | [
"hemingkx/ImageNetVC"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.134.bib | https://aclanthology.org/2023.findings-emnlp.134/ | @inproceedings{fetahu-etal-2023-multiconer,
title = "{M}ulti{C}o{NER} v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition",
author = "Fetahu, Besnik and
Chen, Zhiyu and
Kar, Sudipta and
Rokhlenko, Oleg and
Malmasi, Shervin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.134",
doi = "10.18653/v1/2023.findings-emnlp.134",
pages = "2027--2051",
abstract = "We present MULTICONER V2, a dataset for fine-grained Named Entity Recognition covering 33 entity classes across 12 languages, in both monolingual and multilingual settings. This dataset aims to tackle the following practical challenges in NER: (i) effective handling of fine-grained classes that include complex entities like movie titles, and (ii) performance degradation due to noise generated from typing mistakes or OCR errors. The dataset is compiled from open resources like Wikipedia and Wikidata, and is publicly available. Evaluation based on the XLM-RoBERTa baseline highlights the unique challenges posed by MULTICONER V2: (i) the fine-grained taxonomy is challenging, where the scores are low with macro-F1=0.63 (across all languages), and (ii) the corruption strategy significantly impairs performance, with entity corruption resulting in 9{\%} lower performance relative to non-entity corruptions across all languages. This highlights the greater impact of entity noise in contrast to context noise.",
}
| We present MULTICONER V2, a dataset for fine-grained Named Entity Recognition covering 33 entity classes across 12 languages, in both monolingual and multilingual settings. This dataset aims to tackle the following practical challenges in NER: (i) effective handling of fine-grained classes that include complex entities like movie titles, and (ii) performance degradation due to noise generated from typing mistakes or OCR errors. The dataset is compiled from open resources like Wikipedia and Wikidata, and is publicly available. Evaluation based on the XLM-RoBERTa baseline highlights the unique challenges posed by MULTICONER V2: (i) the fine-grained taxonomy is challenging, where the scores are low with macro-F1=0.63 (across all languages), and (ii) the corruption strategy significantly impairs performance, with entity corruption resulting in 9{\%} lower performance relative to non-entity corruptions across all languages. This highlights the greater impact of entity noise in contrast to context noise. | [
"Fetahu, Besnik",
"Chen, Zhiyu",
"Kar, Sudipta",
"Rokhlenko, Oleg",
"Malmasi, Shervin"
] | MultiCoNER v2: a Large Multilingual dataset for Fine-grained and Noisy Named Entity Recognition | findings-emnlp.134 | 2310.13213 | [
""
] | https://huggingface.co/papers/2310.13213 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.135.bib | https://aclanthology.org/2023.findings-emnlp.135/ | @inproceedings{zhang-wang-2023-query,
title = "A Query-Parallel Machine Reading Comprehension Framework for Low-resource {NER}",
author = "Zhang, Yuhao and
Wang, Yongliang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.135",
doi = "10.18653/v1/2023.findings-emnlp.135",
pages = "2052--2065",
abstract = "Named entity recognition (NER) is a fundamental task in natural language processing. Recently, NER has been formulated as a machine reading comprehension (MRC) task, in which manually-crafted queries are used to extract entities of different types. However, current MRC-based NER techniques are limited to extracting a single type of entities at a time and are largely geared towards resource-rich settings. This renders them inefficient during the inference phase, while also leaving their potential untapped for utilization in low-resource settings. We suggest a query-parallel MRC-based approach to address these issues, which is capable of extracting multiple entity types concurrently and is applicable to both resource-rich and resource-limited settings. Specifically, we propose a query-parallel encoder which uses a query-segmented attention mechanism to isolate the semantics of queries and model the query-context interaction with a unidirectional flow. This allows for easier generalization to new entity types or transfer to new domains. After obtaining the query and context representations through the encoder, they are fed into a query-conditioned biaffine predictor to extract multiple entities at once. The model is trained with parameter-efficient tuning technique, making it more data-efficient. We conduct extensive experiments and demonstrate that our model performs competitively against strong baseline methods in resource-rich settings, and achieves state-of-the-art results in low-resource settings, including training-from-scratch, in-domain transfer and cross-domain transfer tasks.",
}
| Named entity recognition (NER) is a fundamental task in natural language processing. Recently, NER has been formulated as a machine reading comprehension (MRC) task, in which manually-crafted queries are used to extract entities of different types. However, current MRC-based NER techniques are limited to extracting a single type of entities at a time and are largely geared towards resource-rich settings. This renders them inefficient during the inference phase, while also leaving their potential untapped for utilization in low-resource settings. We suggest a query-parallel MRC-based approach to address these issues, which is capable of extracting multiple entity types concurrently and is applicable to both resource-rich and resource-limited settings. Specifically, we propose a query-parallel encoder which uses a query-segmented attention mechanism to isolate the semantics of queries and model the query-context interaction with a unidirectional flow. This allows for easier generalization to new entity types or transfer to new domains. After obtaining the query and context representations through the encoder, they are fed into a query-conditioned biaffine predictor to extract multiple entities at once. The model is trained with parameter-efficient tuning technique, making it more data-efficient. We conduct extensive experiments and demonstrate that our model performs competitively against strong baseline methods in resource-rich settings, and achieves state-of-the-art results in low-resource settings, including training-from-scratch, in-domain transfer and cross-domain transfer tasks. | [
"Zhang, Yuhao",
"Wang, Yongliang"
] | A Query-Parallel Machine Reading Comprehension Framework for Low-resource NER | findings-emnlp.135 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.136.bib | https://aclanthology.org/2023.findings-emnlp.136/ | @inproceedings{he-tang-2023-bispn,
title = "{B}i{SPN}: Generating Entity Set and Relation Set Coherently in One Pass",
author = "He, Yuxin and
Tang, Buzhou",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.136",
doi = "10.18653/v1/2023.findings-emnlp.136",
pages = "2066--2077",
abstract = "By modeling the interaction among instances and avoiding error propagation, Set Prediction Networks (SPNs) achieve state-of-the-art performance on the tasks of named entity recognition and relation triple extraction respectively. However, how to jointly extract entities and relation triples via SPNs remains an unexplored problem, where the main challenge is the maintenance of coherence between the predicted entity/relation sets during one-pass generation. In this work, we present Bipartite Set Prediction Network (BiSPN), a novel joint entity-relation extraction model that can efficiently generate entity set and relation set in parallel. To overcome the challenge of coherence, BiSPN is equipped with a novel bipartite consistency loss as well as an entity-relation linking loss during training. Experiments on three biomedical/clinical datasets and a general-domain dataset show that BiSPN achieves new state of the art in knowledge-intensive scene and performs competitively in general-domain, while being more efficient than two-stage joint extraction methods.",
}
| By modeling the interaction among instances and avoiding error propagation, Set Prediction Networks (SPNs) achieve state-of-the-art performance on the tasks of named entity recognition and relation triple extraction respectively. However, how to jointly extract entities and relation triples via SPNs remains an unexplored problem, where the main challenge is the maintenance of coherence between the predicted entity/relation sets during one-pass generation. In this work, we present Bipartite Set Prediction Network (BiSPN), a novel joint entity-relation extraction model that can efficiently generate entity set and relation set in parallel. To overcome the challenge of coherence, BiSPN is equipped with a novel bipartite consistency loss as well as an entity-relation linking loss during training. Experiments on three biomedical/clinical datasets and a general-domain dataset show that BiSPN achieves new state of the art in knowledge-intensive scene and performs competitively in general-domain, while being more efficient than two-stage joint extraction methods. | [
"He, Yuxin",
"Tang, Buzhou"
] | BiSPN: Generating Entity Set and Relation Set Coherently in One Pass | findings-emnlp.136 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.137.bib | https://aclanthology.org/2023.findings-emnlp.137/ | @inproceedings{ferron-etal-2023-meep,
title = "{MEEP}: Is this Engaging? Prompting Large Language Models for Dialogue Evaluation in Multilingual Settings",
author = "Ferron, Amila and
Shore, Amber and
Mitra, Ekata and
Agrawal, Ameeta",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.137",
doi = "10.18653/v1/2023.findings-emnlp.137",
pages = "2078--2100",
abstract = "As dialogue systems become more popular, evaluation of their response quality gains importance. Engagingness highly correlates with overall quality and creates a sense of connection that gives human participants a more fulfilling experience. Although qualities like coherence and fluency are readily measured with well-worn automatic metrics, evaluating engagingness often relies on human assessment, which is a costly and time-consuming process. Existing automatic engagingness metrics evaluate the response without the conversation history, are designed for one dataset, or have limited correlation with human annotations. Furthermore, they have been tested exclusively on English conversations. Given that dialogue systems are increasingly available in languages beyond English, multilingual evaluation capabilities are essential. We propose that large language models (LLMs) may be used for evaluation of engagingness in dialogue through prompting, and ask how prompt constructs and translated prompts compare in a multilingual setting. We provide a prompt-design taxonomy for engagingness and find that using selected prompt elements with LLMs, including our comprehensive definition of engagingness, outperforms state-of-the-art methods on evaluation of engagingness in dialogue across multiple languages.",
}
| As dialogue systems become more popular, evaluation of their response quality gains importance. Engagingness highly correlates with overall quality and creates a sense of connection that gives human participants a more fulfilling experience. Although qualities like coherence and fluency are readily measured with well-worn automatic metrics, evaluating engagingness often relies on human assessment, which is a costly and time-consuming process. Existing automatic engagingness metrics evaluate the response without the conversation history, are designed for one dataset, or have limited correlation with human annotations. Furthermore, they have been tested exclusively on English conversations. Given that dialogue systems are increasingly available in languages beyond English, multilingual evaluation capabilities are essential. We propose that large language models (LLMs) may be used for evaluation of engagingness in dialogue through prompting, and ask how prompt constructs and translated prompts compare in a multilingual setting. We provide a prompt-design taxonomy for engagingness and find that using selected prompt elements with LLMs, including our comprehensive definition of engagingness, outperforms state-of-the-art methods on evaluation of engagingness in dialogue across multiple languages. | [
"Ferron, Amila",
"Shore, Amber",
"Mitra, Ekata",
"Agrawal, Ameeta"
] | MEEP: Is this Engaging? Prompting Large Language Models for Dialogue Evaluation in Multilingual Settings | findings-emnlp.137 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.138.bib | https://aclanthology.org/2023.findings-emnlp.138/ | @inproceedings{choe-etal-2023-exploring,
title = "Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models",
author = "Choe, Jaeyoung and
Noh, Keonwoong and
Kim, Nayeon and
Ahn, Seyun and
Jung, Woohwan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.138",
doi = "10.18653/v1/2023.findings-emnlp.138",
pages = "2101--2112",
abstract = "Over the past few years, various domain-specific pretrained language models (PLMs) have been proposed and have outperformed general-domain PLMs in specialized areas such as biomedical, scientific, and clinical domains. In addition, financial PLMs have been studied because of the high economic impact of financial data analysis. However, we found that financial PLMs were not pretrained on sufficiently diverse financial data. This lack of diverse training data leads to a subpar generalization performance, resulting in general-purpose PLMs, including BERT, often outperforming financial PLMs on many downstream tasks. To address this issue, we collected a broad range of financial corpus and trained the Financial Language Model (FiLM) on these diverse datasets. Our experimental results confirm that FiLM outperforms not only existing financial PLMs but also general domain PLMs. Furthermore, we provide empirical evidence that this improvement can be achieved even for unseen corpus groups.",
}
| Over the past few years, various domain-specific pretrained language models (PLMs) have been proposed and have outperformed general-domain PLMs in specialized areas such as biomedical, scientific, and clinical domains. In addition, financial PLMs have been studied because of the high economic impact of financial data analysis. However, we found that financial PLMs were not pretrained on sufficiently diverse financial data. This lack of diverse training data leads to a subpar generalization performance, resulting in general-purpose PLMs, including BERT, often outperforming financial PLMs on many downstream tasks. To address this issue, we collected a broad range of financial corpus and trained the Financial Language Model (FiLM) on these diverse datasets. Our experimental results confirm that FiLM outperforms not only existing financial PLMs but also general domain PLMs. Furthermore, we provide empirical evidence that this improvement can be achieved even for unseen corpus groups. | [
"Choe, Jaeyoung",
"Noh, Keonwoong",
"Kim, Nayeon",
"Ahn, Seyun",
"Jung, Woohwan"
] | Exploring the Impact of Corpus Diversity on Financial Pretrained Language Models | findings-emnlp.138 | 2310.13312 | [
"https://github.com/deep-over/film"
] | https://huggingface.co/papers/2310.13312 | 0 | 0 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.139.bib | https://aclanthology.org/2023.findings-emnlp.139/ | @inproceedings{wu-etal-2023-llmdet,
title = "{LLMD}et: A Third Party Large Language Models Generated Text Detection Tool",
author = "Wu, Kangxi and
Pang, Liang and
Shen, Huawei and
Cheng, Xueqi and
Chua, Tat-Seng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.139",
doi = "10.18653/v1/2023.findings-emnlp.139",
pages = "2113--2133",
abstract = "Generated texts from large language models (LLMs) are remarkably close to high-quality human-authored text, raising concerns about their potential misuse in spreading false information and academic misconduct. Consequently, there is an urgent need for a highly practical detection tool capable of accurately identifying the source of a given text. However, existing detection tools typically rely on access to LLMs and can only differentiate between machine-generated and human-authored text, failing to meet the requirements of fine-grained tracing, intermediary judgment, and rapid detection. Therefore, we propose LLMDet, a model-specific, secure, efficient, and extendable detection tool, that can source text from specific LLMs, such as GPT-2, OPT, LLaMA, and others. In LLMDet, we record the next-token probabilities of salient n-grams as features to calculate proxy perplexity for each LLM. By jointly analyzing the proxy perplexities of LLMs, we can determine the source of the generated text. Experimental results show that LLMDet yields impressive detection performance while ensuring speed and security, achieving 98.54{\%} precision and about $\times 5.0$ faster for recognizing human-authored text. Additionally, LLMDet can effortlessly extend its detection capabilities to a new open-source model. We will provide an open-source tool at \url{https://github.com/TrustedLLM/LLMDet}.",
}
| Generated texts from large language models (LLMs) are remarkably close to high-quality human-authored text, raising concerns about their potential misuse in spreading false information and academic misconduct. Consequently, there is an urgent need for a highly practical detection tool capable of accurately identifying the source of a given text. However, existing detection tools typically rely on access to LLMs and can only differentiate between machine-generated and human-authored text, failing to meet the requirements of fine-grained tracing, intermediary judgment, and rapid detection. Therefore, we propose LLMDet, a model-specific, secure, efficient, and extendable detection tool, that can source text from specific LLMs, such as GPT-2, OPT, LLaMA, and others. In LLMDet, we record the next-token probabilities of salient n-grams as features to calculate proxy perplexity for each LLM. By jointly analyzing the proxy perplexities of LLMs, we can determine the source of the generated text. Experimental results show that LLMDet yields impressive detection performance while ensuring speed and security, achieving 98.54{\%} precision and about $\times 5.0$ faster for recognizing human-authored text. Additionally, LLMDet can effortlessly extend its detection capabilities to a new open-source model. We will provide an open-source tool at \url{https://github.com/TrustedLLM/LLMDet}. | [
"Wu, Kangxi",
"Pang, Liang",
"Shen, Huawei",
"Cheng, Xueqi",
"Chua, Tat-Seng"
] | LLMDet: A Third Party Large Language Models Generated Text Detection Tool | findings-emnlp.139 | 2305.15004 | [
"https://github.com/trustedllm/llmdet"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.140.bib | https://aclanthology.org/2023.findings-emnlp.140/ | @inproceedings{hou-etal-2023-recap,
title = "{RECAP}: Towards Precise Radiology Report Generation via Dynamic Disease Progression Reasoning",
author = "Hou, Wenjun and
Cheng, Yi and
Xu, Kaishuai and
Li, Wenjie and
Liu, Jiang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.140",
doi = "10.18653/v1/2023.findings-emnlp.140",
pages = "2134--2147",
abstract = "Automating radiology report generation can significantly alleviate radiologists{'} workloads. Previous research has primarily focused on realizing highly concise observations while neglecting the precise attributes that determine the severity of diseases (e.g., small pleural effusion). Since incorrect attributes will lead to imprecise radiology reports, strengthening the generation process with precise attribute modeling becomes necessary. Additionally, the temporal information contained in the historical records, which is crucial in evaluating a patient{'}s current condition (e.g., heart size is unchanged), has also been largely disregarded. To address these issues, we propose RECAP, which generates precise and accurate radiology reports via dynamic disease progression reasoning. Specifically, RECAP first predicts the observations and progressions (i.e., spatiotemporal information) given two consecutive radiographs. It then combines the historical records, spatiotemporal information, and radiographs for report generation, where a disease progression graph and dynamic progression reasoning mechanism are devised to accurately select the attributes of each observation and progression. Extensive experiments on two publicly available datasets demonstrate the effectiveness of our model.",
}
| Automating radiology report generation can significantly alleviate radiologists{'} workloads. Previous research has primarily focused on realizing highly concise observations while neglecting the precise attributes that determine the severity of diseases (e.g., small pleural effusion). Since incorrect attributes will lead to imprecise radiology reports, strengthening the generation process with precise attribute modeling becomes necessary. Additionally, the temporal information contained in the historical records, which is crucial in evaluating a patient{'}s current condition (e.g., heart size is unchanged), has also been largely disregarded. To address these issues, we propose RECAP, which generates precise and accurate radiology reports via dynamic disease progression reasoning. Specifically, RECAP first predicts the observations and progressions (i.e., spatiotemporal information) given two consecutive radiographs. It then combines the historical records, spatiotemporal information, and radiographs for report generation, where a disease progression graph and dynamic progression reasoning mechanism are devised to accurately select the attributes of each observation and progression. Extensive experiments on two publicly available datasets demonstrate the effectiveness of our model. | [
"Hou, Wenjun",
"Cheng, Yi",
"Xu, Kaishuai",
"Li, Wenjie",
"Liu, Jiang"
] | RECAP: Towards Precise Radiology Report Generation via Dynamic Disease Progression Reasoning | findings-emnlp.140 | 2310.13864 | [
"https://github.com/wjhou/recap"
] | https://huggingface.co/papers/2310.13864 | 1 | 1 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.141.bib | https://aclanthology.org/2023.findings-emnlp.141/ | @inproceedings{liu-etal-2023-causal,
title = "Causal Intervention for Abstractive Related Work Generation",
author = "Liu, Jiachang and
Zhang, Qi and
Shi, Chongyang and
Naseem, Usman and
Wang, Shoujin and
Hu, Liang and
Tsang, Ivor",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.141",
doi = "10.18653/v1/2023.findings-emnlp.141",
pages = "2148--2159",
abstract = "Abstractive related work generation has attracted increasing attention in generating coherent related work that helps readers grasp the current research. However, most existing models ignore the inherent causality during related work generation, leading to spurious correlations which downgrade the models{'} generation quality and generalizability. In this study, we argue that causal intervention can address such limitations and improve the quality and coherence of generated related work. To this end, we propose a novel Causal Intervention Module for Related Work Generation (CaM) to effectively capture causalities in the generation process. Specifically, we first model the relations among the sentence order, document (reference) correlations, and transitional content in related work generation using a causal graph. Then, to implement causal interventions and mitigate the negative impact of spurious correlations, we use do-calculus to derive ordinary conditional probabilities and identify causal effects through CaM. Finally, we subtly fuse CaM with Transformer to obtain an end-to-end related work generation framework. Extensive experiments on two real-world datasets show that CaM can effectively promote the model to learn causal relations and thus produce related work of higher quality and coherence.",
}
| Abstractive related work generation has attracted increasing attention in generating coherent related work that helps readers grasp the current research. However, most existing models ignore the inherent causality during related work generation, leading to spurious correlations which downgrade the models{'} generation quality and generalizability. In this study, we argue that causal intervention can address such limitations and improve the quality and coherence of generated related work. To this end, we propose a novel Causal Intervention Module for Related Work Generation (CaM) to effectively capture causalities in the generation process. Specifically, we first model the relations among the sentence order, document (reference) correlations, and transitional content in related work generation using a causal graph. Then, to implement causal interventions and mitigate the negative impact of spurious correlations, we use do-calculus to derive ordinary conditional probabilities and identify causal effects through CaM. Finally, we subtly fuse CaM with Transformer to obtain an end-to-end related work generation framework. Extensive experiments on two real-world datasets show that CaM can effectively promote the model to learn causal relations and thus produce related work of higher quality and coherence. | [
"Liu, Jiachang",
"Zhang, Qi",
"Shi, Chongyang",
"Naseem, Usman",
"Wang, Shoujin",
"Hu, Liang",
"Tsang, Ivor"
] | Causal Intervention for Abstractive Related Work Generation | findings-emnlp.141 | 2305.13685 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.142.bib | https://aclanthology.org/2023.findings-emnlp.142/ | @inproceedings{zhang-etal-2023-g,
title = "{G}-{SPEED}: General {SP}arse Efficient Editing {M}o{D}el",
author = "Zhang, Haoke and
Wang, Yue and
Li, Juntao and
Zhou, Xiabing and
Zhang, Min",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.142",
doi = "10.18653/v1/2023.findings-emnlp.142",
pages = "2160--2175",
abstract = "Large Language Models (LLMs) have demonstrated incredible capabilities in understanding, generating, and manipulating languages. Through human-model interactions, LLMs can automatically understand human-issued instructions and output the expected contents, which can significantly increase working efficiency. In various types of real-world demands, editing-oriented tasks account for a considerable proportion, which involves an interactive process that entails the continuous refinement of existing texts to meet specific criteria. Due to the need for multi-round human-model interaction and the generation of complicated editing tasks, there is an emergent need for efficient general editing models. In this paper, we propose \textbf{G}eneral \textbf{SP}arse \textbf{E}fficient \textbf{E}diting Mo\textbf{D}el (\textbf{G-SPEED}), which can fulfill diverse editing requirements through a single model while maintaining low computational costs. Specifically, we first propose a novel unsupervised text editing data clustering algorithm to deal with the data scarcity problem. Subsequently, we introduce a sparse editing model architecture to mitigate the inherently limited learning capabilities of small language models. The experimental outcomes indicate that G-SPEED, with its 508M parameters, can surpass LLMs equipped with 175B parameters. Our code and model checkpoints are available at \url{https://github.com/Banner-Z/G-SPEED}.",
}
| Large Language Models (LLMs) have demonstrated incredible capabilities in understanding, generating, and manipulating languages. Through human-model interactions, LLMs can automatically understand human-issued instructions and output the expected contents, which can significantly increase working efficiency. In various types of real-world demands, editing-oriented tasks account for a considerable proportion, which involves an interactive process that entails the continuous refinement of existing texts to meet specific criteria. Due to the need for multi-round human-model interaction and the generation of complicated editing tasks, there is an emergent need for efficient general editing models. In this paper, we propose \textbf{G}eneral \textbf{SP}arse \textbf{E}fficient \textbf{E}diting Mo\textbf{D}el (\textbf{G-SPEED}), which can fulfill diverse editing requirements through a single model while maintaining low computational costs. Specifically, we first propose a novel unsupervised text editing data clustering algorithm to deal with the data scarcity problem. Subsequently, we introduce a sparse editing model architecture to mitigate the inherently limited learning capabilities of small language models. The experimental outcomes indicate that G-SPEED, with its 508M parameters, can surpass LLMs equipped with 175B parameters. Our code and model checkpoints are available at \url{https://github.com/Banner-Z/G-SPEED}. | [
"Zhang, Haoke",
"Wang, Yue",
"Li, Juntao",
"Zhou, Xiabing",
"Zhang, Min"
] | G-SPEED: General SParse Efficient Editing MoDel | findings-emnlp.142 | 2310.10480 | [
"https://github.com/banner-z/g-speed"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.143.bib | https://aclanthology.org/2023.findings-emnlp.143/ | @inproceedings{deng-etal-2023-attack,
title = "Attack Prompt Generation for Red Teaming and Defending Large Language Models",
author = "Deng, Boyi and
Wang, Wenjie and
Feng, Fuli and
Deng, Yang and
Wang, Qifan and
He, Xiangnan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.143",
doi = "10.18653/v1/2023.findings-emnlp.143",
pages = "2176--2189",
abstract = "Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, considering the impressive capabilities of newly emerged LLMs, we propose an attack framework to instruct LLMs to mimic human-generated prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework to enhance their safety against red teaming attacks. Extensive experiments on different LLMs validate the effectiveness of our proposed attack and defense frameworks. Additionally, we release a series of attack prompts datasets named SAP with varying sizes, facilitating the safety evaluation and enhancement of more LLMs.",
}
| Large language models (LLMs) are susceptible to red teaming attacks, which can induce LLMs to generate harmful content. Previous research constructs attack prompts via manual or automatic methods, which have their own limitations on construction cost and quality. To address these issues, we propose an integrated approach that combines manual and automatic methods to economically generate high-quality attack prompts. Specifically, considering the impressive capabilities of newly emerged LLMs, we propose an attack framework to instruct LLMs to mimic human-generated prompts through in-context learning. Furthermore, we propose a defense framework that fine-tunes victim LLMs through iterative interactions with the attack framework to enhance their safety against red teaming attacks. Extensive experiments on different LLMs validate the effectiveness of our proposed attack and defense frameworks. Additionally, we release a series of attack prompts datasets named SAP with varying sizes, facilitating the safety evaluation and enhancement of more LLMs. | [
"Deng, Boyi",
"Wang, Wenjie",
"Feng, Fuli",
"Deng, Yang",
"Wang, Qifan",
"He, Xiangnan"
] | Attack Prompt Generation for Red Teaming and Defending Large Language Models | findings-emnlp.143 | 2310.12505 | [
"https://github.com/aatrox103/sap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.144.bib | https://aclanthology.org/2023.findings-emnlp.144/ | @inproceedings{chen-etal-2023-smart,
title = "Smart {``}Chef{''}: Verifying the Effect of Role-based Paraphrasing for Aspect Term Extraction",
author = "Chen, Jiaxiang and
Hong, Yu and
Xu, Qingting and
Yao, Jianmin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.144",
doi = "10.18653/v1/2023.findings-emnlp.144",
pages = "2190--2197",
abstract = "We tackle Aspect Term Extraction (ATE), a task of automatically extracting aspect terms from sentences. The current Pretrained Language Model (PLM) based extractors have achieved significant improvements. They primarily benefit from context-aware encoding. However, a considerable number of sentences in ATE corpora contain uninformative or low-quality contexts. Such sentences frequently act as {``}troublemakers{''} during test. In this study, we explore the context-oriented quality improvement method. Specifically, we propose to automatically rewrite the sentences from the perspectives of virtual experts with different roles, such as a {``}chef{''} in the restaurant domain. On this basis, we perform ATE over the paraphrased sentences during test, using the well-trained extractors without any change. In the experiments, we leverage ChatGPT to determine virtual experts in the considered domains, and induce ChatGPT to generate paraphrases conditioned on the roles of virtual experts. We experiment on the benchmark SemEval datasets, including Laptop-domain L14 and Restaurant-domain R14-16. The experimental results show that our approach effectively recalls the inconspicuous aspect terms like {``}al di la{''}, although it reduces the precision. In addition, it is proven that our approach can be substantially improved by redundancy elimination and multi-role voting. More importantly, our approach can be used to expand the predictions obtained on the original sentences. This yields state-of-the-art performance (i.e., F1-scores of 86.2{\%}, 89.3{\%}, 77.7{\%}, 82.7{\%} on L14 and R14-16) without retraining or fine-tuning the baseline extractors.",
}
| We tackle Aspect Term Extraction (ATE), a task of automatically extracting aspect terms from sentences. The current Pretrained Language Model (PLM) based extractors have achieved significant improvements. They primarily benefit from context-aware encoding. However, a considerable number of sentences in ATE corpora contain uninformative or low-quality contexts. Such sentences frequently act as {``}troublemakers{''} during test. In this study, we explore the context-oriented quality improvement method. Specifically, we propose to automatically rewrite the sentences from the perspectives of virtual experts with different roles, such as a {``}chef{''} in the restaurant domain. On this basis, we perform ATE over the paraphrased sentences during test, using the well-trained extractors without any change. In the experiments, we leverage ChatGPT to determine virtual experts in the considered domains, and induce ChatGPT to generate paraphrases conditioned on the roles of virtual experts. We experiment on the benchmark SemEval datasets, including Laptop-domain L14 and Restaurant-domain R14-16. The experimental results show that our approach effectively recalls the inconspicuous aspect terms like {``}al di la{''}, although it reduces the precision. In addition, it is proven that our approach can be substantially improved by redundancy elimination and multi-role voting. More importantly, our approach can be used to expand the predictions obtained on the original sentences. This yields state-of-the-art performance (i.e., F1-scores of 86.2{\%}, 89.3{\%}, 77.7{\%}, 82.7{\%} on L14 and R14-16) without retraining or fine-tuning the baseline extractors. | [
"Chen, Jiaxiang",
"Hong, Yu",
"Xu, Qingting",
"Yao, Jianmin"
] | Smart “Chef”: Verifying the Effect of Role-based Paraphrasing for Aspect Term Extraction | findings-emnlp.144 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.145.bib | https://aclanthology.org/2023.findings-emnlp.145/ | @inproceedings{lyu-etal-2023-multi,
title = "Multi-Defendant Legal Judgment Prediction via Hierarchical Reasoning",
author = "Lyu, Yougang and
Hao, Jitai and
Wang, Zihan and
Zhao, Kai and
Gao, Shen and
Ren, Pengjie and
Chen, Zhumin and
Wang, Fang and
Ren, Zhaochun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.145",
doi = "10.18653/v1/2023.findings-emnlp.145",
pages = "2198--2209",
abstract = "Multiple defendants in a criminal fact description generally exhibit complex interactions, and cannot be well handled by existing Legal Judgment Prediction (LJP) methods which focus on predicting judgment results (e.g., law articles, charges, and terms of penalty) for single-defendant cases. To address this problem, we propose the task of multi-defendant LJP, which aims to automatically predict the judgment results for each defendant of multi-defendant cases. Two challenges arise with the task of multi-defendant LJP: (1) indistinguishable judgment results among various defendants; and (2) the lack of a real-world dataset for training and evaluation. To tackle the first challenge, we formalize the multi-defendant judgment process as hierarchical reasoning chains and introduce a multi-defendant LJP method, named Hierarchical Reasoning Network (HRN), which follows the hierarchical reasoning chains to determine criminal relationships, sentencing circumstances, law articles, charges, and terms of penalty for each defendant. To tackle the second challenge, we collect a real-world multi-defendant LJP dataset, namely MultiLJP, to accelerate the relevant research in the future. Extensive experiments on MultiLJP verify the effectiveness of our proposed HRN.",
}
| Multiple defendants in a criminal fact description generally exhibit complex interactions, and cannot be well handled by existing Legal Judgment Prediction (LJP) methods which focus on predicting judgment results (e.g., law articles, charges, and terms of penalty) for single-defendant cases. To address this problem, we propose the task of multi-defendant LJP, which aims to automatically predict the judgment results for each defendant of multi-defendant cases. Two challenges arise with the task of multi-defendant LJP: (1) indistinguishable judgment results among various defendants; and (2) the lack of a real-world dataset for training and evaluation. To tackle the first challenge, we formalize the multi-defendant judgment process as hierarchical reasoning chains and introduce a multi-defendant LJP method, named Hierarchical Reasoning Network (HRN), which follows the hierarchical reasoning chains to determine criminal relationships, sentencing circumstances, law articles, charges, and terms of penalty for each defendant. To tackle the second challenge, we collect a real-world multi-defendant LJP dataset, namely MultiLJP, to accelerate the relevant research in the future. Extensive experiments on MultiLJP verify the effectiveness of our proposed HRN. | [
"Lyu, Yougang",
"Hao, Jitai",
"Wang, Zihan",
"Zhao, Kai",
"Gao, Shen",
"Ren, Pengjie",
"Chen, Zhumin",
"Wang, Fang",
"Ren, Zhaochun"
] | Multi-Defendant Legal Judgment Prediction via Hierarchical Reasoning | findings-emnlp.145 | 2312.05762 | [
"https://github.com/currentf/hrn"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.146.bib | https://aclanthology.org/2023.findings-emnlp.146/ | @inproceedings{wang-etal-2023-interpreting,
title = "Interpreting Indirect Answers to Yes-No Questions in Multiple Languages",
author = "Wang, Zijie and
Hossain, Md and
Mathur, Shivam and
Melo, Terry and
Ozler, Kadir and
Park, Keun and
Quintero, Jacob and
Rezaei, MohammadHossein and
Shakya, Shreya and
Uddin, Md and
Blanco, Eduardo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.146",
doi = "10.18653/v1/2023.findings-emnlp.146",
pages = "2210--2227",
abstract = "Yes-no questions expect a yes or no for an answer, but people often skip polar keywords. Instead, they answer with long explanations that must be interpreted. In this paper, we focus on this challenging problem and release new benchmarks in eight languages. We present a distant supervision approach to collect training data, and demonstrate that direct answers (i.e., with polar keywords) are useful to train models to interpret indirect answers (i.e., without polar keywords). We show that monolingual fine-tuning is beneficial if training data can be obtained via distant supervision for the language of interest (5 languages). Additionally, we show that cross-lingual fine-tuning is always beneficial (8 languages).",
}
| Yes-no questions expect a yes or no for an answer, but people often skip polar keywords. Instead, they answer with long explanations that must be interpreted. In this paper, we focus on this challenging problem and release new benchmarks in eight languages. We present a distant supervision approach to collect training data, and demonstrate that direct answers (i.e., with polar keywords) are useful to train models to interpret indirect answers (i.e., without polar keywords). We show that monolingual fine-tuning is beneficial if training data can be obtained via distant supervision for the language of interest (5 languages). Additionally, we show that cross-lingual fine-tuning is always beneficial (8 languages). | [
"Wang, Zijie",
"Hossain, Md",
"Mathur, Shivam",
"Melo, Terry",
"Ozler, Kadir",
"Park, Keun",
"Quintero, Jacob",
"Rezaei, MohammadHossein",
"Shakya, Shreya",
"Uddin, Md",
"Blanco, Eduardo"
] | Interpreting Indirect Answers to Yes-No Questions in Multiple Languages | findings-emnlp.146 | 2310.13290 | [
"https://github.com/wang-zijie/yn-question-multilingual"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.147.bib | https://aclanthology.org/2023.findings-emnlp.147/ | @inproceedings{wang-etal-2023-generalizing,
title = "Generalizing Few-Shot Named Entity Recognizers to Unseen Domains with Type-Related Features",
author = "Wang, Zihan and
Zhao, Ziqi and
Chen, Zhumin and
Ren, Pengjie and
de Rijke, Maarten and
Ren, Zhaochun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.147",
doi = "10.18653/v1/2023.findings-emnlp.147",
pages = "2228--2240",
abstract = "Few-shot named entity recognition (NER) has shown remarkable progress in identifying entities in low-resource domains. However, few-shot NER methods still struggle with out-of-domain (OOD) examples due to their reliance on manual labeling for the target domain. To address this limitation, recent studies enable generalization to an unseen target domain with only a few labeled examples using data augmentation techniques. Two important challenges remain: First, augmentation is limited to the training data, resulting in minimal overlap between the generated data and OOD examples. Second, knowledge transfer is implicit and insufficient, severely hindering model generalizability and the integration of knowledge from the source domain. In this paper, we propose a framework, prompt learning with type-related features (PLTR), to address these challenges. To identify useful knowledge in the source domain and enhance knowledge transfer, PLTR automatically extracts entity type-related features (TRFs) based on mutual information criteria. To bridge the gap between training and OOD data, PLTR generates a unique prompt for each unseen example by selecting relevant TRFs. We show that PLTR achieves significant performance improvements on in-domain and cross-domain datasets. The use of PLTR facilitates model adaptation and increases representation similarities between the source and unseen domains.",
}
| Few-shot named entity recognition (NER) has shown remarkable progress in identifying entities in low-resource domains. However, few-shot NER methods still struggle with out-of-domain (OOD) examples due to their reliance on manual labeling for the target domain. To address this limitation, recent studies enable generalization to an unseen target domain with only a few labeled examples using data augmentation techniques. Two important challenges remain: First, augmentation is limited to the training data, resulting in minimal overlap between the generated data and OOD examples. Second, knowledge transfer is implicit and insufficient, severely hindering model generalizability and the integration of knowledge from the source domain. In this paper, we propose a framework, prompt learning with type-related features (PLTR), to address these challenges. To identify useful knowledge in the source domain and enhance knowledge transfer, PLTR automatically extracts entity type-related features (TRFs) based on mutual information criteria. To bridge the gap between training and OOD data, PLTR generates a unique prompt for each unseen example by selecting relevant TRFs. We show that PLTR achieves significant performance improvements on in-domain and cross-domain datasets. The use of PLTR facilitates model adaptation and increases representation similarities between the source and unseen domains. | [
"Wang, Zihan",
"Zhao, Ziqi",
"Chen, Zhumin",
"Ren, Pengjie",
"de Rijke, Maarten",
"Ren, Zhaochun"
] | Generalizing Few-Shot Named Entity Recognizers to Unseen Domains with Type-Related Features | findings-emnlp.147 | 2310.09846 | [
"https://github.com/wzh-nlp/pltr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.148.bib | https://aclanthology.org/2023.findings-emnlp.148/ | @inproceedings{han-etal-2023-intervention,
title = "Intervention-Based Alignment of Code Search with Execution Feedback",
author = "Han, Hojae and
Kim, Minsoo and
Hwang, Seung-won and
Duan, Nan and
Lu, Shuai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.148",
doi = "10.18653/v1/2023.findings-emnlp.148",
pages = "2241--2263",
abstract = "One of the fundamental goals in code search is to retrieve a functionally correct code for a given natural language query. As annotating for correctness requires executing test cases (i.e. obtaining execution feedback), existing code search training datasets approximate text-code co-occurrences as positive execution feedback. However, this approximation may misalign models{'} retrieval decisions from ground-truth correctness. To address such limitation, we propose Code Intervention-based Reinforcement Learning (CIRL) that perturbs training code to result in misalignment (i.e. code intervention), then tests models{'} decisions and corrects them with the execution feedback by reinforcement learning. The first technical contribution of CIRL is to induce the execution feedback from perturbation, without actual execution. Secondly, CIRL introduces structural perturbations using abstract syntax trees, going beyond simple lexical changes. Experimental results on various datasets demonstrate the effectiveness of CIRL compared to conventional approaches.",
}
| One of the fundamental goals in code search is to retrieve a functionally correct code for a given natural language query. As annotating for correctness requires executing test cases (i.e. obtaining execution feedback), existing code search training datasets approximate text-code co-occurrences as positive execution feedback. However, this approximation may misalign models{'} retrieval decisions from ground-truth correctness. To address such limitation, we propose Code Intervention-based Reinforcement Learning (CIRL) that perturbs training code to result in misalignment (i.e. code intervention), then tests models{'} decisions and corrects them with the execution feedback by reinforcement learning. The first technical contribution of CIRL is to induce the execution feedback from perturbation, without actual execution. Secondly, CIRL introduces structural perturbations using abstract syntax trees, going beyond simple lexical changes. Experimental results on various datasets demonstrate the effectiveness of CIRL compared to conventional approaches. | [
"Han, Hojae",
"Kim, Minsoo",
"Hwang, Seung-won",
"Duan, Nan",
"Lu, Shuai"
] | Intervention-Based Alignment of Code Search with Execution Feedback | findings-emnlp.148 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.149.bib | https://aclanthology.org/2023.findings-emnlp.149/ | @inproceedings{huang-etal-2023-enhancing,
title = "Enhancing Neural Machine Translation with Semantic Units",
author = "Huang, Langlin and
Gu, Shuhao and
Zhuocheng, Zhang and
Feng, Yang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.149",
doi = "10.18653/v1/2023.findings-emnlp.149",
pages = "2264--2277",
abstract = "Conventional neural machine translation (NMT) models typically use subwords and words as the basic units for model input and comprehension. However, complete words and phrases composed of several tokens are often the fundamental units for expressing semantics, referred to as semantic units. To address this issue, we propose a method Semantic Units for Machine Translation (SU4MT) which models the integral meanings of semantic units within a sentence, and then leverages them to provide a new perspective for understanding the sentence. Specifically, we first propose Word Pair Encoding (WPE), a phrase extraction method to help identify the boundaries of semantic units. Next, we design an Attentive Semantic Fusion (ASF) layer to integrate the semantics of multiple subwords into a single vector: the semantic unit representation. Lastly, the semantic-unit-level sentence representation is concatenated to the token-level one, and they are combined as the input of encoder. Experimental results demonstrate that our method effectively models and leverages semantic-unit-level information and outperforms the strong baselines.",
}
| Conventional neural machine translation (NMT) models typically use subwords and words as the basic units for model input and comprehension. However, complete words and phrases composed of several tokens are often the fundamental units for expressing semantics, referred to as semantic units. To address this issue, we propose a method Semantic Units for Machine Translation (SU4MT) which models the integral meanings of semantic units within a sentence, and then leverages them to provide a new perspective for understanding the sentence. Specifically, we first propose Word Pair Encoding (WPE), a phrase extraction method to help identify the boundaries of semantic units. Next, we design an Attentive Semantic Fusion (ASF) layer to integrate the semantics of multiple subwords into a single vector: the semantic unit representation. Lastly, the semantic-unit-level sentence representation is concatenated to the token-level one, and they are combined as the input of encoder. Experimental results demonstrate that our method effectively models and leverages semantic-unit-level information and outperforms the strong baselines. | [
"Huang, Langlin",
"Gu, Shuhao",
"Zhuocheng, Zhang",
"Feng, Yang"
] | Enhancing Neural Machine Translation with Semantic Units | findings-emnlp.149 | 2310.11360 | [
"https://github.com/ictnlp/su4mt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.150.bib | https://aclanthology.org/2023.findings-emnlp.150/ | @inproceedings{kim-lee-2023-draft,
title = "{DRAFT}: Dense Retrieval Augmented Few-shot Topic classifier Framework",
author = "Kim, Keonwoo and
Lee, Younggun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.150",
doi = "10.18653/v1/2023.findings-emnlp.150",
pages = "2278--2294",
abstract = "With the growing volume of diverse information, the demand for classifying arbitrary topics has become increasingly critical. To address this challenge, we introduce DRAFT, a simple framework designed to train a classifier for few-shot topic classification. DRAFT uses a few examples of a specific topic as queries to construct Customized dataset with a dense retriever model. Multi-query retrieval (MQR) algorithm, which effectively handles multiple queries related to a specific topic, is applied to construct the Customized dataset. Subsequently, we fine-tune a classifier using the Customized dataset to identify the topic. To demonstrate the efficacy of our proposed approach, we conduct evaluations on both widely used classification benchmark datasets and manually constructed datasets with 291 diverse topics, which simulate diverse contents encountered in real-world applications. DRAFT shows competitive or superior performance compared to baselines that use in-context learning, such as GPT-3 175B and InstructGPT 175B, on few-shot topic classification tasks despite having 177 times fewer parameters, demonstrating its effectiveness.",
}
| With the growing volume of diverse information, the demand for classifying arbitrary topics has become increasingly critical. To address this challenge, we introduce DRAFT, a simple framework designed to train a classifier for few-shot topic classification. DRAFT uses a few examples of a specific topic as queries to construct Customized dataset with a dense retriever model. Multi-query retrieval (MQR) algorithm, which effectively handles multiple queries related to a specific topic, is applied to construct the Customized dataset. Subsequently, we fine-tune a classifier using the Customized dataset to identify the topic. To demonstrate the efficacy of our proposed approach, we conduct evaluations on both widely used classification benchmark datasets and manually constructed datasets with 291 diverse topics, which simulate diverse contents encountered in real-world applications. DRAFT shows competitive or superior performance compared to baselines that use in-context learning, such as GPT-3 175B and InstructGPT 175B, on few-shot topic classification tasks despite having 177 times fewer parameters, demonstrating its effectiveness. | [
"Kim, Keonwoo",
"Lee, Younggun"
] | DRAFT: Dense Retrieval Augmented Few-shot Topic classifier Framework | findings-emnlp.150 | 2312.02532 | [
"https://github.com/gunny97/DRAFT"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.151.bib | https://aclanthology.org/2023.findings-emnlp.151/ | @inproceedings{akoury-etal-2023-framework,
title = "A Framework for Exploring Player Perceptions of {LLM}-Generated Dialogue in Commercial Video Games",
author = "Akoury, Nader and
Yang, Qian and
Iyyer, Mohit",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.151",
doi = "10.18653/v1/2023.findings-emnlp.151",
pages = "2295--2311",
abstract = "The growing capabilities of large language models (LLMs) have inspired recent efforts to integrate LLM-generated dialogue into video games. However, evaluation remains a major challenge: how do we assess the player experience in a commercial game augmented with LLM-generated dialogue? To explore this question, we introduce a dynamic evaluation framework for the dialogue management systems that govern the task-oriented dialogue often found in roleplaying video games. We first extract dialogue from the widely-acclaimed role-playing game *Disco Elysium: The Final Cut*, which contains 1.1M words of dialogue spread across a complex graph of utterances where node reachability depends on game state (e.g., whether a certain item is held). Using this dataset, we have GPT-4 perform *dialogue infilling* to generate grounded utterances based on game state represented via code. In a statistically robust study of 28 players recruited from the r/DiscoyElysium subreddit, the LLM outputs are evaluated against the game designers{'} writing via both preference judgments and free-form feedback using a web interface that recreates the game{'}s core conversation functionality. Overall, the game designers{'} prose is significantly preferred to GPT-4 generations, with participants citing reasons such as improved logical flow and grounding with the game state. To spur more principled future research in this area, we release our web interface and tools to enable researchers to build upon our work. https://pl.aiwright.dev",
}
| The growing capabilities of large language models (LLMs) have inspired recent efforts to integrate LLM-generated dialogue into video games. However, evaluation remains a major challenge: how do we assess the player experience in a commercial game augmented with LLM-generated dialogue? To explore this question, we introduce a dynamic evaluation framework for the dialogue management systems that govern the task-oriented dialogue often found in roleplaying video games. We first extract dialogue from the widely-acclaimed role-playing game *Disco Elysium: The Final Cut*, which contains 1.1M words of dialogue spread across a complex graph of utterances where node reachability depends on game state (e.g., whether a certain item is held). Using this dataset, we have GPT-4 perform *dialogue infilling* to generate grounded utterances based on game state represented via code. In a statistically robust study of 28 players recruited from the r/DiscoyElysium subreddit, the LLM outputs are evaluated against the game designers{'} writing via both preference judgments and free-form feedback using a web interface that recreates the game{'}s core conversation functionality. Overall, the game designers{'} prose is significantly preferred to GPT-4 generations, with participants citing reasons such as improved logical flow and grounding with the game state. To spur more principled future research in this area, we release our web interface and tools to enable researchers to build upon our work. https://pl.aiwright.dev | [
"Akoury, Nader",
"Yang, Qian",
"Iyyer, Mohit"
] | A Framework for Exploring Player Perceptions of LLM-Generated Dialogue in Commercial Video Games | findings-emnlp.151 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.152.bib | https://aclanthology.org/2023.findings-emnlp.152/ | @inproceedings{jiang-etal-2023-generative,
title = "Generative Calibration for In-context Learning",
author = "Jiang, Zhongtao and
Zhang, Yuanzhe and
Liu, Cao and
Zhao, Jun and
Liu, Kang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.152",
doi = "10.18653/v1/2023.findings-emnlp.152",
pages = "2312--2333",
abstract = "As one of the most exciting features of large language models (LLMs), in-context learning is a mixed blessing. While it allows users to fast-prototype a task solver with only a few training examples, the performance is generally sensitive to various configurations of the prompt such as the choice or order of the training examples. In this paper, we for the first time theoretically and empirically identify that such a paradox is mainly due to the label shift of the in-context model to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$. With this understanding, we can simply calibrate the in-context predictive distribution by adjusting the label marginal, which is estimated via Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We call our approach as generative calibration. We conduct exhaustive experiments with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, generally find that the proposed method greatly and consistently outperforms the ICL as well as state-of-the-art calibration methods, by up to 27{\%} absolute in macro-F1. Meanwhile, the proposed method is also stable under different prompt configurations.",
}
| As one of the most exciting features of large language models (LLMs), in-context learning is a mixed blessing. While it allows users to fast-prototype a task solver with only a few training examples, the performance is generally sensitive to various configurations of the prompt such as the choice or order of the training examples. In this paper, we for the first time theoretically and empirically identify that such a paradox is mainly due to the label shift of the in-context model to the data distribution, in which LLMs shift the label marginal $p(y)$ while having a good label conditional $p(x|y)$. With this understanding, we can simply calibrate the in-context predictive distribution by adjusting the label marginal, which is estimated via Monte-Carlo sampling over the in-context model, i.e., generation of LLMs. We call our approach as generative calibration. We conduct exhaustive experiments with 12 text classification tasks and 12 LLMs scaling from 774M to 33B, generally find that the proposed method greatly and consistently outperforms the ICL as well as state-of-the-art calibration methods, by up to 27{\%} absolute in macro-F1. Meanwhile, the proposed method is also stable under different prompt configurations. | [
"Jiang, Zhongtao",
"Zhang, Yuanzhe",
"Liu, Cao",
"Zhao, Jun",
"Liu, Kang"
] | Generative Calibration for In-context Learning | findings-emnlp.152 | 2310.10266 | [
"https://github.com/changmenseng/generative_calibration"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.153.bib | https://aclanthology.org/2023.findings-emnlp.153/ | @inproceedings{ma-etal-2023-chain-thought,
title = "Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction",
author = "Ma, Xilai and
Li, Jing and
Zhang, Min",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.153",
doi = "10.18653/v1/2023.findings-emnlp.153",
pages = "2334--2352",
abstract = "Few-shot relation extraction involves identifying the type of relationship between two specific entities within a text, using a limited number of annotated samples. A variety of solutions to this problem have emerged by applying meta-learning and neural graph techniques which typically necessitate a training process for adaptation. Recently, the strategy of in-context learning has been demonstrating notable results without the need of training. Few studies have already utilized in-context learning for zero-shot information extraction. Unfortunately, the evidence for inference is either not considered or implicitly modeled during the construction of chain-of-thought prompts. In this paper, we propose a novel approach for few-shot relation extraction using large language models, named CoT-ER, chain-of-thought with explicit evidence reasoning. In particular, CoT-ER first induces large language models to generate evidences using task-specific and concept-level knowledge. Then these evidences are explicitly incorporated into chain-of-thought prompting for relation extraction. Experimental results demonstrate that our CoT-ER approach (with 0{\%} training data) achieves competitive performance compared to the fully-supervised (with 100{\%} training data) state-of-the-art approach on the FewRel1.0 and FewRel2.0 datasets.",
}
| Few-shot relation extraction involves identifying the type of relationship between two specific entities within a text, using a limited number of annotated samples. A variety of solutions to this problem have emerged by applying meta-learning and neural graph techniques which typically necessitate a training process for adaptation. Recently, the strategy of in-context learning has been demonstrating notable results without the need of training. Few studies have already utilized in-context learning for zero-shot information extraction. Unfortunately, the evidence for inference is either not considered or implicitly modeled during the construction of chain-of-thought prompts. In this paper, we propose a novel approach for few-shot relation extraction using large language models, named CoT-ER, chain-of-thought with explicit evidence reasoning. In particular, CoT-ER first induces large language models to generate evidences using task-specific and concept-level knowledge. Then these evidences are explicitly incorporated into chain-of-thought prompting for relation extraction. Experimental results demonstrate that our CoT-ER approach (with 0{\%} training data) achieves competitive performance compared to the fully-supervised (with 100{\%} training data) state-of-the-art approach on the FewRel1.0 and FewRel2.0 datasets. | [
"Ma, Xilai",
"Li, Jing",
"Zhang, Min"
] | Chain of Thought with Explicit Evidence Reasoning for Few-shot Relation Extraction | findings-emnlp.153 | 2311.05922 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.154.bib | https://aclanthology.org/2023.findings-emnlp.154/ | @inproceedings{zeng-etal-2023-adatrans,
title = "{A}da{T}ran{S}: Adapting with Boundary-based Shrinking for End-to-End Speech Translation",
author = "Zeng, Xingshan and
Li, Liangyou and
Liu, Qun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.154",
doi = "10.18653/v1/2023.findings-emnlp.154",
pages = "2353--2361",
abstract = "To alleviate the data scarcity problem in End-to-end speech translation (ST), pre-training on data for speech recognition and machine translation is considered as an important technique. However, the modality gap between speech and text prevents the ST model from efficiently inheriting knowledge from the pre-trained models. In this work, we propose AdaTranS for end-to-end ST. It adapts the speech features with a new shrinking mechanism to mitigate the length mismatch between speech and text features by predicting word boundaries. Experiments on the MUST-C dataset demonstrate that AdaTranS achieves better performance than the other shrinking-based methods, with higher inference speed and lower memory usage. Further experiments also show that AdaTranS can be equipped with additional alignment losses to further improve performance.",
}
| To alleviate the data scarcity problem in End-to-end speech translation (ST), pre-training on data for speech recognition and machine translation is considered as an important technique. However, the modality gap between speech and text prevents the ST model from efficiently inheriting knowledge from the pre-trained models. In this work, we propose AdaTranS for end-to-end ST. It adapts the speech features with a new shrinking mechanism to mitigate the length mismatch between speech and text features by predicting word boundaries. Experiments on the MUST-C dataset demonstrate that AdaTranS achieves better performance than the other shrinking-based methods, with higher inference speed and lower memory usage. Further experiments also show that AdaTranS can be equipped with additional alignment losses to further improve performance. | [
"Zeng, Xingshan",
"Li, Liangyou",
"Liu, Qun"
] | AdaTranS: Adapting with Boundary-based Shrinking for End-to-End Speech Translation | findings-emnlp.154 | 2212.08911 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.155.bib | https://aclanthology.org/2023.findings-emnlp.155/ | @inproceedings{berezin-etal-2023-offence,
title = "No offence, Bert - {I} insult only humans! Multilingual sentence-level attack on toxicity detection networks",
author = "Berezin, Sergey and
Farahbakhsh, Reza and
Crespi, Noel",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.155",
doi = "10.18653/v1/2023.findings-emnlp.155",
pages = "2362--2369",
abstract = "We introduce a simple yet efficient sentence-level attack on black-box toxicity detector models. By adding several positive words or sentences to the end of a hateful message, we are able to change the prediction of a neural network and pass the toxicity detection system check. This approach is shown to be working on seven languages from three different language families. We also describe the defence mechanism against the aforementioned attack and discuss its limitations.",
}
| We introduce a simple yet efficient sentence-level attack on black-box toxicity detector models. By adding several positive words or sentences to the end of a hateful message, we are able to change the prediction of a neural network and pass the toxicity detection system check. This approach is shown to be working on seven languages from three different language families. We also describe the defence mechanism against the aforementioned attack and discuss its limitations. | [
"Berezin, Sergey",
"Farahbakhsh, Reza",
"Crespi, Noel"
] | No offence, Bert - I insult only humans! Multilingual sentence-level attack on toxicity detection networks | findings-emnlp.155 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.156.bib | https://aclanthology.org/2023.findings-emnlp.156/ | @inproceedings{caron-srivastava-2023-manipulating,
title = "Manipulating the Perceived Personality Traits of Language Models",
author = "Caron, Graham and
Srivastava, Shashank",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.156",
doi = "10.18653/v1/2023.findings-emnlp.156",
pages = "2370--2386",
abstract = "Psychology research has long explored aspects of human personality like extroversion, agreeableness and emotional stability, three of the personality traits that make up the {`}Big Five{'}. Categorizations like the {`}Big Five{'} are commonly used to assess and diagnose personality types. In this work, we explore whether text generated from large language models exhibits consistency in it{'}s perceived {`}Big Five{'} personality traits. For example, is a language model such as GPT2 likely to respond in a consistent way if asked to go out to a party? We also show that when exposed to different types of contexts (such as personality descriptions, or answers to diagnostic questions about personality traits), language models such as BERT and GPT2 consistently identify and mirror personality markers in those contexts. This behavior illustrates an ability to be manipulated in a predictable way (with correlations up to 0.84 between intended and realized changes in personality traits), and frames them as tools for controlling personas in applications such as dialog systems. We contribute two data-sets of personality descriptions of humans subjects.",
}
| Psychology research has long explored aspects of human personality like extroversion, agreeableness and emotional stability, three of the personality traits that make up the {`}Big Five{'}. Categorizations like the {`}Big Five{'} are commonly used to assess and diagnose personality types. In this work, we explore whether text generated from large language models exhibits consistency in it{'}s perceived {`}Big Five{'} personality traits. For example, is a language model such as GPT2 likely to respond in a consistent way if asked to go out to a party? We also show that when exposed to different types of contexts (such as personality descriptions, or answers to diagnostic questions about personality traits), language models such as BERT and GPT2 consistently identify and mirror personality markers in those contexts. This behavior illustrates an ability to be manipulated in a predictable way (with correlations up to 0.84 between intended and realized changes in personality traits), and frames them as tools for controlling personas in applications such as dialog systems. We contribute two data-sets of personality descriptions of humans subjects. | [
"Caron, Graham",
"Srivastava, Shashank"
] | Manipulating the Perceived Personality Traits of Language Models | findings-emnlp.156 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.157.bib | https://aclanthology.org/2023.findings-emnlp.157/ | @inproceedings{semnani-etal-2023-wikichat,
title = "{W}iki{C}hat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on {W}ikipedia",
author = "Semnani, Sina and
Yao, Violet and
Zhang, Heidi and
Lam, Monica",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.157",
doi = "10.18653/v1/2023.findings-emnlp.157",
pages = "2387--2413",
abstract = "This paper presents the first few-shot LLM-based chatbot that almost never hallucinates and has high conversationality and low latency. WikiChat is grounded on the English Wikipedia, the largest curated free-text corpus. WikiChat generates a response from an LLM, retains only the grounded facts, and combines them with additional information it retrieves from the corpus to form factual and engaging responses. We distill WikiChat based on GPT-4 into a 7B-parameter LLaMA model with minimal loss of quality, to significantly improve its latency, cost and privacy, and facilitate research and deployment. Using a novel hybrid human-and-LLM evaluation methodology, we show that our best system achieves 97.3{\%} factual accuracy in simulated conversations. It significantly outperforms all retrieval-based and LLM-based baselines, and by 3.9{\%}, 38.6{\%} and 51.0{\%} on head, tail and recent knowledge compared to GPT-4. Compared to previous state-of-the-art retrieval-based chatbots, WikiChat is also significantly more informative and engaging, just like an LLM. WikiChat achieves 97.9{\%} factual accuracy in conversations with human users about recent topics, 55.0{\%} better than GPT-4, while receiving significantly higher user ratings and more favorable comments.",
}
| This paper presents the first few-shot LLM-based chatbot that almost never hallucinates and has high conversationality and low latency. WikiChat is grounded on the English Wikipedia, the largest curated free-text corpus. WikiChat generates a response from an LLM, retains only the grounded facts, and combines them with additional information it retrieves from the corpus to form factual and engaging responses. We distill WikiChat based on GPT-4 into a 7B-parameter LLaMA model with minimal loss of quality, to significantly improve its latency, cost and privacy, and facilitate research and deployment. Using a novel hybrid human-and-LLM evaluation methodology, we show that our best system achieves 97.3{\%} factual accuracy in simulated conversations. It significantly outperforms all retrieval-based and LLM-based baselines, and by 3.9{\%}, 38.6{\%} and 51.0{\%} on head, tail and recent knowledge compared to GPT-4. Compared to previous state-of-the-art retrieval-based chatbots, WikiChat is also significantly more informative and engaging, just like an LLM. WikiChat achieves 97.9{\%} factual accuracy in conversations with human users about recent topics, 55.0{\%} better than GPT-4, while receiving significantly higher user ratings and more favorable comments. | [
"Semnani, Sina",
"Yao, Violet",
"Zhang, Heidi",
"Lam, Monica"
] | WikiChat: Stopping the Hallucination of Large Language Model Chatbots by Few-Shot Grounding on Wikipedia | findings-emnlp.157 | 2305.14292 | [
"https://github.com/stanford-oval/wikichat"
] | https://huggingface.co/papers/2305.14292 | 1 | 1 | 0 | 4 | [
"stanford-oval/Llama-2-7b-WikiChat",
"stanford-oval/Llama-2-7b-WikiChat-fused",
"RichardErkhov/stanford-oval_-_Llama-2-7b-WikiChat-fused-4bits",
"RichardErkhov/stanford-oval_-_Llama-2-7b-WikiChat-fused-8bits",
"RichardErkhov/stanford-oval_-_Llama-2-7b-WikiChat-fused-gguf",
"RichardErkhov/stanford-oval_-_Llama-2-7b-WikiChat-gguf"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.158.bib | https://aclanthology.org/2023.findings-emnlp.158/ | @inproceedings{aly-etal-2023-automated,
title = "Automated Few-Shot Classification with Instruction-Finetuned Language Models",
author = "Aly, Rami and
Shi, Xingjian and
Lin, Kaixiang and
Zhang, Aston and
Wilson, Andrew",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.158",
doi = "10.18653/v1/2023.findings-emnlp.158",
pages = "2414--2432",
abstract = "A particularly successful class of approaches for few-shot learning combines language models with prompts - hand-crafted task descriptions that complement data samples. However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction finetuned language models are remarkably robust towards some dimensions of a prompt{'}s design. We subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful, class descriptions and a selection mechanism via cross-validation. Over 12 datasets, spanning 8 classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks.",
}
| A particularly successful class of approaches for few-shot learning combines language models with prompts - hand-crafted task descriptions that complement data samples. However, designing prompts by hand for each task commonly requires domain knowledge and substantial guesswork. We observe, in the context of classification tasks, that instruction finetuned language models are remarkably robust towards some dimensions of a prompt{'}s design. We subsequently propose a simple method to eliminate the need for handcrafted prompts, named AuT-Few. This approach consists of (i) a prompt retrieval module that selects suitable task instructions from the instruction-tuning knowledge base, and (ii) the generation of two distinct, semantically meaningful, class descriptions and a selection mechanism via cross-validation. Over 12 datasets, spanning 8 classification tasks, we show that AuT-Few outperforms current state-of-the-art few-shot learning methods. Moreover, AuT-Few is the best ranking method across datasets on the RAFT few-shot benchmark. Notably, these results are achieved without task-specific handcrafted prompts on unseen tasks. | [
"Aly, Rami",
"Shi, Xingjian",
"Lin, Kaixiang",
"Zhang, Aston",
"Wilson, Andrew"
] | Automated Few-Shot Classification with Instruction-Finetuned Language Models | findings-emnlp.158 | 2305.12576 | [
"https://github.com/raldir/aut-few"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.159.bib | https://aclanthology.org/2023.findings-emnlp.159/ | @inproceedings{ha-etal-2023-meta,
title = "Meta-Learning of Prompt Generation for Lightweight Prompt Engineering on Language-Model-as-a-Service",
author = "Ha, Hyeonmin and
Lee, Jihye and
Han, Wookje and
Chun, Byung-Gon",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.159",
doi = "10.18653/v1/2023.findings-emnlp.159",
pages = "2433--2445",
abstract = "Recently, many companies have been providing the capabilities of large language models as services. These Language-Model-as-a-Service (LMaaS) offerings support a variety of user tasks through in-context learning from prompts, which include instructions and demonstrations of the task. However, for users, manually crafting prompts or running automatic prompt tuning methods themselves can be demanding. Despite these challenges, LMaaS providers do not offer automatic prompt engineering methods as part of their services. One of the major obstacles to deploying them on an LMaaS is the heavy computational costs associated with automatic prompt engineering methods. These methods are typically designed to iterate through tens of thousands of examples, which impose unaffordable overheads for LMaaS providers. In this paper, we introduce MetaL-Prompt, a novel lightweight automatic prompt generation method for LMaaS. MetaL-Prompt meta-trains a prompt generation model (PGM) to enable robust learning by the language model from the contexts created by the generated prompts (i.e., in-context learning). Thanks to our meta-learning approach, a PGM can generate prompts for unseen tasks without requiring additional training for those specific tasks. Furthermore, the PGM can generate prompts with a single forward pass, significantly reducing computational costs compared to previous methods. We evaluate MetaL-Prompt on a range of unseen tasks and find that it improves performance by up to 19.4{\%} in terms of mean F1 score on QA datasets compared to the state-of-the-art baseline P-tuning, with limited computational cost.",
}
| Recently, many companies have been providing the capabilities of large language models as services. These Language-Model-as-a-Service (LMaaS) offerings support a variety of user tasks through in-context learning from prompts, which include instructions and demonstrations of the task. However, for users, manually crafting prompts or running automatic prompt tuning methods themselves can be demanding. Despite these challenges, LMaaS providers do not offer automatic prompt engineering methods as part of their services. One of the major obstacles to deploying them on an LMaaS is the heavy computational costs associated with automatic prompt engineering methods. These methods are typically designed to iterate through tens of thousands of examples, which impose unaffordable overheads for LMaaS providers. In this paper, we introduce MetaL-Prompt, a novel lightweight automatic prompt generation method for LMaaS. MetaL-Prompt meta-trains a prompt generation model (PGM) to enable robust learning by the language model from the contexts created by the generated prompts (i.e., in-context learning). Thanks to our meta-learning approach, a PGM can generate prompts for unseen tasks without requiring additional training for those specific tasks. Furthermore, the PGM can generate prompts with a single forward pass, significantly reducing computational costs compared to previous methods. We evaluate MetaL-Prompt on a range of unseen tasks and find that it improves performance by up to 19.4{\%} in terms of mean F1 score on QA datasets compared to the state-of-the-art baseline P-tuning, with limited computational cost. | [
"Ha, Hyeonmin",
"Lee, Jihye",
"Han, Wookje",
"Chun, Byung-Gon"
] | Meta-Learning of Prompt Generation for Lightweight Prompt Engineering on Language-Model-as-a-Service | findings-emnlp.159 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.160.bib | https://aclanthology.org/2023.findings-emnlp.160/ | @inproceedings{yuan-etal-2023-beneath,
title = "Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction",
author = "Yuan, Siyu and
Chen, Jiangjie and
Ge, Xuyang and
Xiao, Yanghua and
Yang, Deqing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.160",
doi = "10.18653/v1/2023.findings-emnlp.160",
pages = "2446--2460",
abstract = "The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures. Despite the attention previous research has given to word analogies, this work suggests that Large Language Models (LLMs) often overlook the structures that underpin these analogies, raising questions about the efficacy of word analogies as a measure of analogical reasoning skills akin to human cognition. In response to this, our paper introduces a task of analogical structure abduction, grounded in cognitive psychology, designed to abduce structures that form an analogy between two systems. In support of this task, we establish a benchmark called SCAR, containing 400 scientific analogies from 13 distinct fields, tailored for evaluating analogical reasoning with structure abduction. The empirical evidence underlines the continued challenges faced by LLMs, including ChatGPT and GPT-4, in mastering this task, signifying the need for future exploration to enhance their abilities.",
}
| The vital role of analogical reasoning in human cognition allows us to grasp novel concepts by linking them with familiar ones through shared relational structures. Despite the attention previous research has given to word analogies, this work suggests that Large Language Models (LLMs) often overlook the structures that underpin these analogies, raising questions about the efficacy of word analogies as a measure of analogical reasoning skills akin to human cognition. In response to this, our paper introduces a task of analogical structure abduction, grounded in cognitive psychology, designed to abduce structures that form an analogy between two systems. In support of this task, we establish a benchmark called SCAR, containing 400 scientific analogies from 13 distinct fields, tailored for evaluating analogical reasoning with structure abduction. The empirical evidence underlines the continued challenges faced by LLMs, including ChatGPT and GPT-4, in mastering this task, signifying the need for future exploration to enhance their abilities. | [
"Yuan, Siyu",
"Chen, Jiangjie",
"Ge, Xuyang",
"Xiao, Yanghua",
"Yang, Deqing"
] | Beneath Surface Similarity: Large Language Models Make Reasonable Scientific Analogies after Structure Abduction | findings-emnlp.160 | 2305.12660 | [
"https://github.com/siyuyuan/scar"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.161.bib | https://aclanthology.org/2023.findings-emnlp.161/ | @inproceedings{wu-etal-2023-hicl,
title = "{H}i{CL}: Hierarchical Contrastive Learning of Unsupervised Sentence Embeddings",
author = "Wu, Zhuofeng and
Xiao, Chaowei and
Vydiswaran, VG Vinod",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.161",
doi = "10.18653/v1/2023.findings-emnlp.161",
pages = "2461--2476",
abstract = "In this paper, we propose a hierarchical contrastive learning framework, HiCL, which considers local segment-level and global sequence-level relationships to improve training efficiency and effectiveness. Traditional methods typically encode a sequence in its entirety for contrast with others, often neglecting local representation learning, leading to challenges in generalizing to shorter texts. Conversely, HiCL improves its effectiveness by dividing the sequence into several segments and employing both local and global contrastive learning to model segment-level and sequence-level relationships. Further, considering the quadratic time complexity of transformers over input tokens, HiCL boosts training efficiency by first encoding short segments and then aggregating them to obtain the sequence representation. Extensive experiments show that HiCL enhances the prior top-performing SNCSE model across seven extensively evaluated STS tasks, with an average increase of +0.2{\%} observed on $BERT_{large}$ and +0.44{\%} on $RoBERTa_{large}$.",
}
| In this paper, we propose a hierarchical contrastive learning framework, HiCL, which considers local segment-level and global sequence-level relationships to improve training efficiency and effectiveness. Traditional methods typically encode a sequence in its entirety for contrast with others, often neglecting local representation learning, leading to challenges in generalizing to shorter texts. Conversely, HiCL improves its effectiveness by dividing the sequence into several segments and employing both local and global contrastive learning to model segment-level and sequence-level relationships. Further, considering the quadratic time complexity of transformers over input tokens, HiCL boosts training efficiency by first encoding short segments and then aggregating them to obtain the sequence representation. Extensive experiments show that HiCL enhances the prior top-performing SNCSE model across seven extensively evaluated STS tasks, with an average increase of +0.2{\%} observed on $BERT_{large}$ and +0.44{\%} on $RoBERTa_{large}$. | [
"Wu, Zhuofeng",
"Xiao, Chaowei",
"Vydiswaran, VG Vinod"
] | HiCL: Hierarchical Contrastive Learning of Unsupervised Sentence Embeddings | findings-emnlp.161 | 2310.09720 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.162.bib | https://aclanthology.org/2023.findings-emnlp.162/ | @inproceedings{wu-etal-2023-density,
title = "Density-Aware Prototypical Network for Few-Shot Relation Classification",
author = "Wu, Jianfeng and
Hu, Mengting and
Wu, Yike and
Wu, Bingzhe and
Xie, Yalan and
Liu, Mingming and
Cheng, Renhong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.162",
doi = "10.18653/v1/2023.findings-emnlp.162",
pages = "2477--2489",
abstract = "In recent years, few-shot relation classification has evoked many research interests. Yet a more challenging problem, i.e. none-of-the-above (NOTA), is under-explored. Existing works mainly regard NOTA as an extra class and treat it the same as known relations. However, such a solution ignores the overall instance distribution, where NOTA instances are actually outliers and distributed unnaturally compared with known ones. In this paper, we propose a density-aware prototypical network (D-Proto) to treat various instances distinctly. Specifically, we design unique training objectives to separate known instances and isolate NOTA instances, respectively. This produces an ideal instance distribution, where known instances are dense yet NOTAs have a small density. Moreover, we propose a NOTA detection module to further enlarge the density of known samples, and discriminate NOTA and known samples accurately. Experimental results demonstrate that the proposed method outperforms strong baselines with robustness towards various NOTA rates. The code will be made public after the paper is accepted.",
}
| In recent years, few-shot relation classification has evoked many research interests. Yet a more challenging problem, i.e. none-of-the-above (NOTA), is under-explored. Existing works mainly regard NOTA as an extra class and treat it the same as known relations. However, such a solution ignores the overall instance distribution, where NOTA instances are actually outliers and distributed unnaturally compared with known ones. In this paper, we propose a density-aware prototypical network (D-Proto) to treat various instances distinctly. Specifically, we design unique training objectives to separate known instances and isolate NOTA instances, respectively. This produces an ideal instance distribution, where known instances are dense yet NOTAs have a small density. Moreover, we propose a NOTA detection module to further enlarge the density of known samples, and discriminate NOTA and known samples accurately. Experimental results demonstrate that the proposed method outperforms strong baselines with robustness towards various NOTA rates. The code will be made public after the paper is accepted. | [
"Wu, Jianfeng",
"Hu, Mengting",
"Wu, Yike",
"Wu, Bingzhe",
"Xie, Yalan",
"Liu, Mingming",
"Cheng, Renhong"
] | Density-Aware Prototypical Network for Few-Shot Relation Classification | findings-emnlp.162 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.163.bib | https://aclanthology.org/2023.findings-emnlp.163/ | @inproceedings{yang-etal-2023-improved,
title = "Improved Training of Deep Text Clustering",
author = "Yang, Zonghao and
Hu, Wenpeng and
Tan, Yushan and
Luo, Zhunchen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.163",
doi = "10.18653/v1/2023.findings-emnlp.163",
pages = "2490--2499",
abstract = "The classical deep clustering optimization methods basically leverage information such as clustering centers, mutual information, and distance metrics to construct implicit generalized labels to establish information feedback (weak supervision) and thus optimize the deep model. However, the resulting generalized labels have different degrees of errors in the whole clustering process due to the limitation of clustering accuracy, which greatly interferes with the clustering process. To this end, this paper proposes a general deep clustering optimization method from the perspective of empirical risk minimization, using the correlation relationship between the samples. Experiments on two classical deep clustering methods demonstrate the necessity and effectiveness of the method. Code is available at https://github.com/yangzonghao1024/DCGLU.",
}
| The classical deep clustering optimization methods basically leverage information such as clustering centers, mutual information, and distance metrics to construct implicit generalized labels to establish information feedback (weak supervision) and thus optimize the deep model. However, the resulting generalized labels have different degrees of errors in the whole clustering process due to the limitation of clustering accuracy, which greatly interferes with the clustering process. To this end, this paper proposes a general deep clustering optimization method from the perspective of empirical risk minimization, using the correlation relationship between the samples. Experiments on two classical deep clustering methods demonstrate the necessity and effectiveness of the method. Code is available at https://github.com/yangzonghao1024/DCGLU. | [
"Yang, Zonghao",
"Hu, Wenpeng",
"Tan, Yushan",
"Luo, Zhunchen"
] | Improved Training of Deep Text Clustering | findings-emnlp.163 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.164.bib | https://aclanthology.org/2023.findings-emnlp.164/ | @inproceedings{deng-etal-2023-regavae,
title = "{R}ega{VAE}: A Retrieval-Augmented {G}aussian Mixture Variational Auto-Encoder for Language Modeling",
author = "Deng, Jingcheng and
Pang, Liang and
Shen, Huawei and
Cheng, Xueqi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.164",
doi = "10.18653/v1/2023.findings-emnlp.164",
pages = "2500--2510",
abstract = "Retrieval-augmented language models show promise in addressing issues like outdated information and hallucinations in language models (LMs). However, current research faces two main problems: 1) determining what information to retrieve, and 2) effectively combining retrieved information during generation. We argue that valuable retrieved information should not only be related to the current source text but also consider the future target text, given the nature of LMs that model future tokens. Moreover, we propose that aggregation using latent variables derived from a compact latent space is more efficient than utilizing explicit raw text, which is limited by context length and susceptible to noise. Therefore, we introduce RegaVAE, a retrieval-augmented language model built upon the variational auto-encoder (VAE). It encodes the text corpus into a latent space, capturing current and future information from both source and target text. Additionally, we leverage the VAE to initialize the latent space and adopt the probabilistic form of the retrieval generation paradigm by expanding the Gaussian prior distribution into a Gaussian mixture distribution. Theoretical analysis provides an optimizable upper bound for RegaVAE. Experimental results on various datasets demonstrate significant improvements in text generation quality and hallucination removal.",
}
| Retrieval-augmented language models show promise in addressing issues like outdated information and hallucinations in language models (LMs). However, current research faces two main problems: 1) determining what information to retrieve, and 2) effectively combining retrieved information during generation. We argue that valuable retrieved information should not only be related to the current source text but also consider the future target text, given the nature of LMs that model future tokens. Moreover, we propose that aggregation using latent variables derived from a compact latent space is more efficient than utilizing explicit raw text, which is limited by context length and susceptible to noise. Therefore, we introduce RegaVAE, a retrieval-augmented language model built upon the variational auto-encoder (VAE). It encodes the text corpus into a latent space, capturing current and future information from both source and target text. Additionally, we leverage the VAE to initialize the latent space and adopt the probabilistic form of the retrieval generation paradigm by expanding the Gaussian prior distribution into a Gaussian mixture distribution. Theoretical analysis provides an optimizable upper bound for RegaVAE. Experimental results on various datasets demonstrate significant improvements in text generation quality and hallucination removal. | [
"Deng, Jingcheng",
"Pang, Liang",
"Shen, Huawei",
"Cheng, Xueqi"
] | RegaVAE: A Retrieval-Augmented Gaussian Mixture Variational Auto-Encoder for Language Modeling | findings-emnlp.164 | 2310.10567 | [
"https://github.com/trustedllm/regavae"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.165.bib | https://aclanthology.org/2023.findings-emnlp.165/ | @inproceedings{yang-etal-2023-refgpt,
title = "{R}ef{GPT}: Dialogue Generation of {GPT}, by {GPT}, and for {GPT}",
author = "Yang, Dongjie and
Yuan, Ruifeng and
Fan, Yuantao and
Yang, Yifei and
Wang, Zili and
Wang, Shusen and
Zhao, Hai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.165",
doi = "10.18653/v1/2023.findings-emnlp.165",
pages = "2511--2535",
abstract = "Large Language Models (LLMs) have attained the impressive capability to resolve a wide range of NLP tasks by fine-tuning high-quality instruction data. However, collecting human-written data of high quality, especially multi-turn dialogues, is expensive and unattainable for most people. Though previous studies have used powerful LLMs to generate the dialogues automatically, they all suffer from generating untruthful dialogues because of the model hallucination. Therefore, we propose a method called RefGPT to generate enormous truthful and customized dialogues without worrying about factual errors caused by the model hallucination. RefGPT solves the model hallucination in dialogue generation by restricting the LLMs to leverage the given reference instead of reciting their own knowledge to generate dialogues. Additionally, RefGPT adds detailed controls on every utterance to enable high customization capability, which previous studies have ignored. On the basis of RefGPT, we also propose two high-quality dialogue datasets generated by GPT-4, namely **RefGPT-Fact** and **RefGPT-Code**. RefGPT-Fact is a dataset with 100k multi-turn dialogues based on factual knowledge and RefGPT-Code has 76k multi-turn dialogues covering a wide range of coding scenarios. Our code and datasets are released in https://github.com/mutonix/RefGPT.",
}
| Large Language Models (LLMs) have attained the impressive capability to resolve a wide range of NLP tasks by fine-tuning high-quality instruction data. However, collecting human-written data of high quality, especially multi-turn dialogues, is expensive and unattainable for most people. Though previous studies have used powerful LLMs to generate the dialogues automatically, they all suffer from generating untruthful dialogues because of the model hallucination. Therefore, we propose a method called RefGPT to generate enormous truthful and customized dialogues without worrying about factual errors caused by the model hallucination. RefGPT solves the model hallucination in dialogue generation by restricting the LLMs to leverage the given reference instead of reciting their own knowledge to generate dialogues. Additionally, RefGPT adds detailed controls on every utterance to enable high customization capability, which previous studies have ignored. On the basis of RefGPT, we also propose two high-quality dialogue datasets generated by GPT-4, namely **RefGPT-Fact** and **RefGPT-Code**. RefGPT-Fact is a dataset with 100k multi-turn dialogues based on factual knowledge and RefGPT-Code has 76k multi-turn dialogues covering a wide range of coding scenarios. Our code and datasets are released in https://github.com/mutonix/RefGPT. | [
"Yang, Dongjie",
"Yuan, Ruifeng",
"Fan, Yuantao",
"Yang, Yifei",
"Wang, Zili",
"Wang, Shusen",
"Zhao, Hai"
] | RefGPT: Dialogue Generation of GPT, by GPT, and for GPT | findings-emnlp.165 | 2305.14994 | [
"https://github.com/mutonix/refgpt"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.166.bib | https://aclanthology.org/2023.findings-emnlp.166/ | @inproceedings{ahmad-etal-2023-ina,
title = "{INA}: An Integrative Approach for Enhancing Negotiation Strategies with Reward-Based Dialogue Agent",
author = "Ahmad, Zishan and
Saurabh, Suman and
Menon, Vaishakh and
Ekbal, Asif and
Ramnani, Roshni and
Maitra, Anutosh",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.166",
doi = "10.18653/v1/2023.findings-emnlp.166",
pages = "2536--2549",
abstract = "In this paper, we propose a novel negotiation agent designed for the online marketplace. Our dialogue agent is integrative in nature i.e, it possesses the capability to negotiate on price as well as other factors, such as the addition or removal of items from a deal bundle, thereby offering a more flexible and comprehensive negotiation experience. To enable this functionality, we create a new dataset called Integrative Negotiation Dataset (IND). For this dataset creation, we introduce a new semi-automated data creation method, which combines defining negotiation intents, actions, and intent-action simulation between users and the agent to generate potential dialogue flows. Finally, the prompting of GPT-J, a state-of-the-art language model, is done to generate dialogues for a given intent, with a human-in-the-loop process for post-editing and refining minor errors to ensure high data quality. We first train a maximum likelihood loss based model on IND, and then employ a set of novel rewards specifically tailored for the negotiation task to train our Integrative Negotiation Agent (INA). These rewards incentivize the agent to learn effective negotiation strategies that can adapt to various contextual requirements and price proposals. We train our model and conduct experiments to evaluate the effectiveness of our reward-based dialogue agent for negotiation. Our results demonstrate that the proposed approach and reward functions significantly enhance the negotiation capabilities of the dialogue agent. The INA successfully engages in integrative negotiations, displaying the ability to dynamically adjust prices and negotiate the inclusion or exclusion of items in a deal bundle.",
}
| In this paper, we propose a novel negotiation agent designed for the online marketplace. Our dialogue agent is integrative in nature i.e, it possesses the capability to negotiate on price as well as other factors, such as the addition or removal of items from a deal bundle, thereby offering a more flexible and comprehensive negotiation experience. To enable this functionality, we create a new dataset called Integrative Negotiation Dataset (IND). For this dataset creation, we introduce a new semi-automated data creation method, which combines defining negotiation intents, actions, and intent-action simulation between users and the agent to generate potential dialogue flows. Finally, the prompting of GPT-J, a state-of-the-art language model, is done to generate dialogues for a given intent, with a human-in-the-loop process for post-editing and refining minor errors to ensure high data quality. We first train a maximum likelihood loss based model on IND, and then employ a set of novel rewards specifically tailored for the negotiation task to train our Integrative Negotiation Agent (INA). These rewards incentivize the agent to learn effective negotiation strategies that can adapt to various contextual requirements and price proposals. We train our model and conduct experiments to evaluate the effectiveness of our reward-based dialogue agent for negotiation. Our results demonstrate that the proposed approach and reward functions significantly enhance the negotiation capabilities of the dialogue agent. The INA successfully engages in integrative negotiations, displaying the ability to dynamically adjust prices and negotiate the inclusion or exclusion of items in a deal bundle. | [
"Ahmad, Zishan",
"Saurabh, Suman",
"Menon, Vaishakh",
"Ekbal, Asif",
"Ramnani, Roshni",
"Maitra, Anutosh"
] | INA: An Integrative Approach for Enhancing Negotiation Strategies with Reward-Based Dialogue Agent | findings-emnlp.166 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.167.bib | https://aclanthology.org/2023.findings-emnlp.167/ | @inproceedings{weng-etal-2023-large,
title = "Large Language Models are Better Reasoners with Self-Verification",
author = "Weng, Yixuan and
Zhu, Minjun and
Xia, Fei and
Li, Bin and
He, Shizhu and
Liu, Shengping and
Sun, Bin and
Liu, Kang and
Zhao, Jun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.167",
doi = "10.18653/v1/2023.findings-emnlp.167",
pages = "2550--2575",
abstract = "Recently, with the chain of thought (CoT) prompting, large language models (LLMs), e.g., GPT-3, have shown strong reasoning ability in several natural language processing tasks such as arithmetic, commonsense, and logical reasoning. However, LLMs with CoT require multi-step prompting and multi-token prediction, which is highly sensitive to individual mistakes and vulnerable to error accumulation. The above issues make the LLMs need the ability to verify the answers. In fact, after inferring conclusions in some thinking decision tasks, people often check them by re-verifying steps to avoid some mistakes. In this paper, we propose and prove that LLMs also have similar self-verification abilities. We take the conclusion obtained by CoT as one of the conditions for solving the original problem. By performing a backward verification of the answers that LLM deduced for itself, we can obtain interpretable answer validation scores to select the candidate answer with the highest score. Experimental results demonstrate that the proposed method can improve the reasoning performance on various arithmetic, commonsense, and logical reasoning datasets. Our code is publicly available at: https://github.com/WENGSYX/Self-Verification.",
}
| Recently, with the chain of thought (CoT) prompting, large language models (LLMs), e.g., GPT-3, have shown strong reasoning ability in several natural language processing tasks such as arithmetic, commonsense, and logical reasoning. However, LLMs with CoT require multi-step prompting and multi-token prediction, which is highly sensitive to individual mistakes and vulnerable to error accumulation. The above issues make the LLMs need the ability to verify the answers. In fact, after inferring conclusions in some thinking decision tasks, people often check them by re-verifying steps to avoid some mistakes. In this paper, we propose and prove that LLMs also have similar self-verification abilities. We take the conclusion obtained by CoT as one of the conditions for solving the original problem. By performing a backward verification of the answers that LLM deduced for itself, we can obtain interpretable answer validation scores to select the candidate answer with the highest score. Experimental results demonstrate that the proposed method can improve the reasoning performance on various arithmetic, commonsense, and logical reasoning datasets. Our code is publicly available at: https://github.com/WENGSYX/Self-Verification. | [
"Weng, Yixuan",
"Zhu, Minjun",
"Xia, Fei",
"Li, Bin",
"He, Shizhu",
"Liu, Shengping",
"Sun, Bin",
"Liu, Kang",
"Zhao, Jun"
] | Large Language Models are Better Reasoners with Self-Verification | findings-emnlp.167 | 2212.09561 | [
"https://github.com/WENGSYX/Self-Verification"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.168.bib | https://aclanthology.org/2023.findings-emnlp.168/ | @inproceedings{du-etal-2023-multi,
title = "Multi-Granularity Information Interaction Framework for Incomplete Utterance Rewriting",
author = "Du, Haowei and
Zhang, Dinghao and
Li, Chen and
Li, Yang and
Zhao, Dongyan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.168",
doi = "10.18653/v1/2023.findings-emnlp.168",
pages = "2576--2581",
abstract = "Recent approaches in Incomplete Utterance Rewriting (IUR) fail to capture the source of important words, which is crucial to edit the incomplete utterance, and introduce words from irrelevant utterances. We propose a novel and effective multi-task information interaction framework including context selection, edit matrix construction, and relevance merging to capture the multi-granularity of semantic information. Benefiting from fetching the relevant utterance and figuring out the important words, our approach outperforms existing state-of-the-art models on two benchmark datasets Restoration-200K and CANAND in this field.",
}
| Recent approaches in Incomplete Utterance Rewriting (IUR) fail to capture the source of important words, which is crucial to edit the incomplete utterance, and introduce words from irrelevant utterances. We propose a novel and effective multi-task information interaction framework including context selection, edit matrix construction, and relevance merging to capture the multi-granularity of semantic information. Benefiting from fetching the relevant utterance and figuring out the important words, our approach outperforms existing state-of-the-art models on two benchmark datasets Restoration-200K and CANARD in this field. | [
"Du, Haowei",
"Zhang, Dinghao",
"Li, Chen",
"Li, Yang",
"Zhao, Dongyan"
] | Multi-Granularity Information Interaction Framework for Incomplete Utterance Rewriting | findings-emnlp.168 | 2312.11945 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.169.bib | https://aclanthology.org/2023.findings-emnlp.169/ | @inproceedings{vansh-etal-2023-accuracy,
title = "Accuracy is not enough: Evaluating Personalization in Summarizers",
author = "Vansh, Rahul and
Rank, Darsh and
Dasgupta, Sourish and
Chakraborty, Tanmoy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.169",
doi = "10.18653/v1/2023.findings-emnlp.169",
pages = "2582--2595",
abstract = "Text summarization models are evaluated in terms of their accuracy and quality using various measures such as ROUGE, BLEU, METEOR, BERTScore, PYRAMID, readability, and several other recently proposed ones. The central objective of all accuracy measures is to evaluate the model{'}s ability to capture $\textit{saliency}$ accurately. Since saliency is subjective w.r.t the readers{'} preferences, there cannot be a fit-all summary for a given document. This means that in many use-cases, summarization models need to be personalized w.r.t user-profiles. However, to our knowledge, there is no measure to evaluate the $\textit{degree-of-personalization}$ of a summarization model. In this paper, we first establish that existing accuracy measures cannot evaluate the degree of personalization of any summarization model, and then propose a novel measure, called $EGISES$, for automatically computing the same. Using the PENS dataset released by Microsoft Research, we analyze the degree of personalization of ten different state-of-the-art summarization models (both extractive and abstractive), five of which are explicitly trained for personalized summarization, and the remaining are appropriated to exhibit personalization. We conclude by proposing a generalized accuracy measure, called $P$-$Accuracy$, for designing accuracy measures that should also take personalization into account and demonstrate the robustness and reliability of the measure through meta-evaluation.",
}
| Text summarization models are evaluated in terms of their accuracy and quality using various measures such as ROUGE, BLEU, METEOR, BERTScore, PYRAMID, readability, and several other recently proposed ones. The central objective of all accuracy measures is to evaluate the model{'}s ability to capture $\textit{saliency}$ accurately. Since saliency is subjective w.r.t the readers{'} preferences, there cannot be a fit-all summary for a given document. This means that in many use-cases, summarization models need to be personalized w.r.t user-profiles. However, to our knowledge, there is no measure to evaluate the $\textit{degree-of-personalization}$ of a summarization model. In this paper, we first establish that existing accuracy measures cannot evaluate the degree of personalization of any summarization model, and then propose a novel measure, called $EGISES$, for automatically computing the same. Using the PENS dataset released by Microsoft Research, we analyze the degree of personalization of ten different state-of-the-art summarization models (both extractive and abstractive), five of which are explicitly trained for personalized summarization, and the remaining are appropriated to exhibit personalization. We conclude by proposing a generalized accuracy measure, called $P$-$Accuracy$, for designing accuracy measures that should also take personalization into account and demonstrate the robustness and reliability of the measure through meta-evaluation. | [
"Vansh, Rahul",
"Rank, Darsh",
"Dasgupta, Sourish",
"Chakraborty, Tanmoy"
] | Accuracy is not enough: Evaluating Personalization in Summarizers | findings-emnlp.169 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.170.bib | https://aclanthology.org/2023.findings-emnlp.170/ | @inproceedings{mersinias-mahowald-2023-generated,
title = "For Generated Text, Is {NLI}-Neutral Text the Best Text?",
author = "Mersinias, Michail and
Mahowald, Kyle",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.170",
doi = "10.18653/v1/2023.findings-emnlp.170",
pages = "2596--2602",
abstract = "We explore incorporating natural language inference (NLI) into the text generative pipeline by using a pre-trained NLI model to assess whether a generated sentence entails, contradicts, or is neutral to the prompt and preceding text. First, we show that the NLI task is predictive of generation errors made by GPT-3. We use these results to develop an NLI-informed generation procedure for GPT-J. Then, we evaluate these generations by obtaining human annotations on error types and overall quality. We find that an NLI strategy of maximizing entailment improves text generation when the nucleus sampling randomness parameter value is high, while one which maximizes contradiction is in fact productive when the parameter value is low. Overall, though, we demonstrate that an NLI strategy of maximizing the neutral class provides the highest quality of generated text (significantly better than the vanilla generations), regardless of parameter value.",
}
| We explore incorporating natural language inference (NLI) into the text generative pipeline by using a pre-trained NLI model to assess whether a generated sentence entails, contradicts, or is neutral to the prompt and preceding text. First, we show that the NLI task is predictive of generation errors made by GPT-3. We use these results to develop an NLI-informed generation procedure for GPT-J. Then, we evaluate these generations by obtaining human annotations on error types and overall quality. We find that an NLI strategy of maximizing entailment improves text generation when the nucleus sampling randomness parameter value is high, while one which maximizes contradiction is in fact productive when the parameter value is low. Overall, though, we demonstrate that an NLI strategy of maximizing the neutral class provides the highest quality of generated text (significantly better than the vanilla generations), regardless of parameter value. | [
"Mersinias, Michail",
"Mahowald, Kyle"
] | For Generated Text, Is NLI-Neutral Text the Best Text? | findings-emnlp.170 | 2302.08577 | [
"https://github.com/michael-mersinias/nli_text_generation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.171.bib | https://aclanthology.org/2023.findings-emnlp.171/ | @inproceedings{hezam-stevenson-2023-combining,
title = "Combining Counting Processes and Classification Improves a Stopping Rule for Technology Assisted Review",
author = "Bin-Hezam, Reem and
Stevenson, Mark",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.171",
doi = "10.18653/v1/2023.findings-emnlp.171",
pages = "2603--2609",
abstract = "Technology Assisted Review (TAR) stopping rules aim to reduce the cost of manually assessing documents for relevance by minimising the number of documents that need to be examined to ensure a desired level of recall. This paper extends an effective stopping rule using information derived from a text classifier that can be trained without the need for any additional annotation. Experiments on multiple data sets (CLEF e-Health, TREC Total Recall, TREC Legal and RCV1) showed that the proposed approach consistently improves performance and outperforms several alternative methods.",
}
| Technology Assisted Review (TAR) stopping rules aim to reduce the cost of manually assessing documents for relevance by minimising the number of documents that need to be examined to ensure a desired level of recall. This paper extends an effective stopping rule using information derived from a text classifier that can be trained without the need for any additional annotation. Experiments on multiple data sets (CLEF e-Health, TREC Total Recall, TREC Legal and RCV1) showed that the proposed approach consistently improves performance and outperforms several alternative methods. | [
"Bin-Hezam, Reem",
"Stevenson, Mark"
] | Combining Counting Processes and Classification Improves a Stopping Rule for Technology Assisted Review | findings-emnlp.171 | 2312.03171 | [
"https://github.com/reembinhezam/tar_stopping_cp_clf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.172.bib | https://aclanthology.org/2023.findings-emnlp.172/ | @inproceedings{vakil-amiri-2023-complexity,
title = "Complexity-Guided Curriculum Learning for Text Graphs",
author = "Vakil, Nidhi and
Amiri, Hadi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.172",
doi = "10.18653/v1/2023.findings-emnlp.172",
pages = "2610--2626",
abstract = "Curriculum learning provides a systematic approach to training. It refines training progressively, tailors training to task requirements, and improves generalization through exposure to diverse examples. We present a curriculum learning approach that builds on existing knowledge about text and graph complexity formalisms for training with text graph data. The core part of our approach is a novel data scheduler, which employs {``}spaced repetition{''} and complexity formalisms to guide the training process. We demonstrate the effectiveness of the proposed approach on several text graph tasks and graph neural network architectures. The proposed model gains more and uses less data; consistently prefers text over graph complexity indices throughout training, while the best curricula derived from text and graph complexity indices are equally effective; and it learns transferable curricula across GNN models and datasets. In addition, we find that both node-level (local) and graph-level (global) graph complexity indices, as well as shallow and traditional text complexity indices play a crucial role in effective curriculum learning.",
}
| Curriculum learning provides a systematic approach to training. It refines training progressively, tailors training to task requirements, and improves generalization through exposure to diverse examples. We present a curriculum learning approach that builds on existing knowledge about text and graph complexity formalisms for training with text graph data. The core part of our approach is a novel data scheduler, which employs {``}spaced repetition{''} and complexity formalisms to guide the training process. We demonstrate the effectiveness of the proposed approach on several text graph tasks and graph neural network architectures. The proposed model gains more and uses less data; consistently prefers text over graph complexity indices throughout training, while the best curricula derived from text and graph complexity indices are equally effective; and it learns transferable curricula across GNN models and datasets. In addition, we find that both node-level (local) and graph-level (global) graph complexity indices, as well as shallow and traditional text complexity indices play a crucial role in effective curriculum learning. | [
"Vakil, Nidhi",
"Amiri, Hadi"
] | Complexity-Guided Curriculum Learning for Text Graphs | findings-emnlp.172 | 2311.13472 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.173.bib | https://aclanthology.org/2023.findings-emnlp.173/ | @inproceedings{ren-etal-2023-covariance,
title = "{C}o{V}ariance-based Causal Debiasing for Entity and Relation Extraction",
author = "Ren, Lin and
Liu, Yongbin and
Cao, Yixin and
Ouyang, Chunping",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.173",
doi = "10.18653/v1/2023.findings-emnlp.173",
pages = "2627--2640",
abstract = "Joint entity and relation extraction tasks aim to recognize named entities and extract relations simultaneously. Suffering from a variety of data biases, such as data selection bias, and distribution bias (out of distribution, long-tail distribution), serious concerns can be witnessed to threaten the model{'}s transferability, robustness, and generalization. In this work, we address the above problems from a causality perspective. We propose a novel causal framework called c$\underline{\textbf{o}}$variance and $\underline{\textbf{v}}$ariance $\underline{\textbf{o}}$ptimization framework (OVO) to optimize feature representations and conduct general debiasing. In particular, the proposed $\underline{\textbf{c}}$ovariance $\underline{\textbf{op}}$timizing (COP) minimizes characterizing features{'} covariance for alleviating the selection and distribution bias and enhances feature representation in the feature space. Furthermore, based on the causal backdoor adjustment, we propose $\\underline{\textbf{v}}$ariance $\underline{\textbf{op}}$timizing (VOP) separates samples in terms of label information and minimizes the variance of each dimension in the feature vectors of the same class label for mitigating the distribution bias further. By applying it to three strong baselines in two widely used datasets, the results demonstrate the effectiveness and generalization of OVO for joint entity and relation extraction tasks. Furthermore, a fine-grained analysis reveals that OVO possesses the capability to mitigate the impact of long-tail distribution.",
}
| Joint entity and relation extraction tasks aim to recognize named entities and extract relations simultaneously. Suffering from a variety of data biases, such as data selection bias, and distribution bias (out of distribution, long-tail distribution), serious concerns can be witnessed to threaten the model{'}s transferability, robustness, and generalization. In this work, we address the above problems from a causality perspective. We propose a novel causal framework called c$\underline{\textbf{o}}$variance and $\underline{\textbf{v}}$ariance $\underline{\textbf{o}}$ptimization framework (OVO) to optimize feature representations and conduct general debiasing. In particular, the proposed $\underline{\textbf{c}}$ovariance $\underline{\textbf{op}}$timizing (COP) minimizes characterizing features{'} covariance for alleviating the selection and distribution bias and enhances feature representation in the feature space. Furthermore, based on the causal backdoor adjustment, we propose $\underline{\textbf{v}}$ariance $\underline{\textbf{op}}$timizing (VOP) separates samples in terms of label information and minimizes the variance of each dimension in the feature vectors of the same class label for mitigating the distribution bias further. By applying it to three strong baselines in two widely used datasets, the results demonstrate the effectiveness and generalization of OVO for joint entity and relation extraction tasks. Furthermore, a fine-grained analysis reveals that OVO possesses the capability to mitigate the impact of long-tail distribution. | [
"Ren, Lin",
"Liu, Yongbin",
"Cao, Yixin",
"Ouyang, Chunping"
] | CoVariance-based Causal Debiasing for Entity and Relation Extraction | findings-emnlp.173 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.174.bib | https://aclanthology.org/2023.findings-emnlp.174/ | @inproceedings{liu-etal-2023-multi,
title = "Multi-label and Multi-target Sampling of Machine Annotation for Computational Stance Detection",
author = "Liu, Zhengyuan and
Chieu, Hai Leong and
Chen, Nancy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.174",
doi = "10.18653/v1/2023.findings-emnlp.174",
pages = "2641--2649",
abstract = "Data collection from manual labeling provides domain-specific and task-aligned supervision for data-driven approaches, and a critical mass of well-annotated resources is required to achieve reasonable performance in natural language processing tasks. However, manual annotations are often challenging to scale up in terms of time and budget, especially when domain knowledge, capturing subtle semantic features, and reasoning steps are needed. In this paper, we investigate the efficacy of leveraging large language models on automated labeling for computational stance detection. We empirically observe that while large language models show strong potential as an alternative to human annotators, their sensitivity to task-specific instructions and their intrinsic biases pose intriguing yet unique challenges in machine annotation. We introduce a multi-label and multi-target sampling strategy to optimize the annotation quality. Experimental results on the benchmark stance detection corpora show that our method can significantly improve performance and learning efficacy.",
}
| Data collection from manual labeling provides domain-specific and task-aligned supervision for data-driven approaches, and a critical mass of well-annotated resources is required to achieve reasonable performance in natural language processing tasks. However, manual annotations are often challenging to scale up in terms of time and budget, especially when domain knowledge, capturing subtle semantic features, and reasoning steps are needed. In this paper, we investigate the efficacy of leveraging large language models on automated labeling for computational stance detection. We empirically observe that while large language models show strong potential as an alternative to human annotators, their sensitivity to task-specific instructions and their intrinsic biases pose intriguing yet unique challenges in machine annotation. We introduce a multi-label and multi-target sampling strategy to optimize the annotation quality. Experimental results on the benchmark stance detection corpora show that our method can significantly improve performance and learning efficacy. | [
"Liu, Zhengyuan",
"Chieu, Hai Leong",
"Chen, Nancy"
] | Multi-label and Multi-target Sampling of Machine Annotation for Computational Stance Detection | findings-emnlp.174 | 2311.04495 | [
"https://github.com/seq-to-mind/Stance_MA"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.175.bib | https://aclanthology.org/2023.findings-emnlp.175/ | @inproceedings{ersoy-etal-2023-languages,
title = "In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages",
author = "Ersoy, As{\i}m and
Vizcarra, Gerson and
Mayeesha, Tahsin and
Muller, Benjamin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.175",
doi = "10.18653/v1/2023.findings-emnlp.175",
pages = "2650--2666",
abstract = "Multilingual generative language models (LMs) are increasingly fluent in a large variety of languages. Trained on the concatenation of corpora in multiple languages, they enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown what cultural biases are induced in the predictions of these models. In this work, we focus on one language property highly influenced by culture: formality. We analyze the formality distributions of XGLM and BLOOM{'}s predictions, two popular generative multilingual language models, in 5 languages. We classify 1,200 generations per language as formal, informal, or incohesive and measure the impact of the prompt formality on the predictions. Overall, we observe a diversity of behaviors across the models and languages. For instance, XGLM generates informal text in Arabic and Bengali when conditioned with informal prompts, much more than BLOOM. In addition, even though both models are highly biased toward the formal style when prompted neutrally, we find that the models generate a significant amount of informal predictions even when prompted with formal text. We release with this work 6,000 annotated samples, paving the way for future work on the formality of generative multilingual LMs.",
}
| Multilingual generative language models (LMs) are increasingly fluent in a large variety of languages. Trained on the concatenation of corpora in multiple languages, they enable powerful transfer from high-resource languages to low-resource ones. However, it is still unknown what cultural biases are induced in the predictions of these models. In this work, we focus on one language property highly influenced by culture: formality. We analyze the formality distributions of XGLM and BLOOM{'}s predictions, two popular generative multilingual language models, in 5 languages. We classify 1,200 generations per language as formal, informal, or incohesive and measure the impact of the prompt formality on the predictions. Overall, we observe a diversity of behaviors across the models and languages. For instance, XGLM generates informal text in Arabic and Bengali when conditioned with informal prompts, much more than BLOOM. In addition, even though both models are highly biased toward the formal style when prompted neutrally, we find that the models generate a significant amount of informal predictions even when prompted with formal text. We release with this work 6,000 annotated samples, paving the way for future work on the formality of generative multilingual LMs. | [
"Ersoy, As{\\i}m",
"Vizcarra, Gerson",
"Mayeesha, Tahsin",
"Muller, Benjamin"
] | In What Languages are Generative Language Models the Most Formal? Analyzing Formality Distribution across Languages | findings-emnlp.175 | 2302.12299 | [
"https://github.com/asimokby/formality-bias-analysis"
] | https://huggingface.co/papers/2302.12299 | 2 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.176.bib | https://aclanthology.org/2023.findings-emnlp.176/ | @inproceedings{changpinyo-etal-2023-maxm,
title = "{M}a{XM}: Towards Multilingual Visual Question Answering",
author = "Changpinyo, Soravit and
Xue, Linting and
Yarom, Michal and
Thapliyal, Ashish and
Szpektor, Idan and
Amelot, Julien and
Chen, Xi and
Soricut, Radu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.176",
doi = "10.18653/v1/2023.findings-emnlp.176",
pages = "2667--2682",
abstract = "Visual Question Answering (VQA) has been primarily studied through the lens of the English language. Yet, tackling VQA in other languages in the same manner would require a considerable amount of resources. In this paper, we propose scalable solutions to multilingual visual question answering (mVQA), on both data and modeling fronts. We first propose a translation-based framework to mVQA data generation that requires much less human annotation efforts than the conventional approach of directly collection questions and answers. Then, we apply our framework to the multilingual captions in the Crossmodal-3600 dataset and develop an efficient annotation protocol to create MaXM, a test-only VQA benchmark in 7 diverse languages. Finally, we develop a simple, lightweight, and effective approach as well as benchmark state-of-the-art English and multilingual VQA models. We hope that our benchmark encourages further research on mVQA.",
}
| Visual Question Answering (VQA) has been primarily studied through the lens of the English language. Yet, tackling VQA in other languages in the same manner would require a considerable amount of resources. In this paper, we propose scalable solutions to multilingual visual question answering (mVQA), on both data and modeling fronts. We first propose a translation-based framework to mVQA data generation that requires much less human annotation efforts than the conventional approach of directly collection questions and answers. Then, we apply our framework to the multilingual captions in the Crossmodal-3600 dataset and develop an efficient annotation protocol to create MaXM, a test-only VQA benchmark in 7 diverse languages. Finally, we develop a simple, lightweight, and effective approach as well as benchmark state-of-the-art English and multilingual VQA models. We hope that our benchmark encourages further research on mVQA. | [
"Changpinyo, Soravit",
"Xue, Linting",
"Yarom, Michal",
"Thapliyal, Ashish",
"Szpektor, Idan",
"Amelot, Julien",
"Chen, Xi",
"Soricut, Radu"
] | MaXM: Towards Multilingual Visual Question Answering | findings-emnlp.176 | 2209.05401 | [
"https://github.com/google-research-datasets/maxm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.177.bib | https://aclanthology.org/2023.findings-emnlp.177/ | @inproceedings{han-etal-2023-efficient,
title = "Efficient Latent Variable Modeling for Knowledge-Grounded Dialogue Generation",
author = "Han, Gunsoo and
Jo, Daejin and
Nam, Daniel and
Yoon, Eunseop and
Kwon, Taehwan and
Rho, Seungeun and
On, Kyoung-Woon and
Yoo, Chang and
Kim, Sungwoong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.177",
doi = "10.18653/v1/2023.findings-emnlp.177",
pages = "2683--2702",
abstract = "Knowledge-grounded dialogue generation requires first retrieving appropriate external knowledge based on a conversational context and then generating a response grounded on the retrieved knowledge. In general, these two sequential modules, a knowledge retriever and a response generator, have been separately trained in a supervised manner. However, obtaining intermediate labels of the ground-truth knowledge is expensive, especially in open-domain conversations. Latent variable modeling avoids this need for the labels. In this paper, we propose an efficient algorithm for this latent variable modeling that is able to leverage a large amount of dialogue data. Rather than directly training the complex retriever, we adapt a query generator with an off-the-shelf retriever, and the query generator and response generator are simultaneously trained over the latent variable of query. Moreover, we employ lower bound of the evidence as a training objective and modify it to robustly perform the joint training. Experimental results on diverse knowledge-grounded dialogue datasets show that the proposed algorithm significantly outperforms the supervised learning algorithm even without the use of the annotated knowledge while maintaining efficiency and scalability.",
}
| Knowledge-grounded dialogue generation requires first retrieving appropriate external knowledge based on a conversational context and then generating a response grounded on the retrieved knowledge. In general, these two sequential modules, a knowledge retriever and a response generator, have been separately trained in a supervised manner. However, obtaining intermediate labels of the ground-truth knowledge is expensive, especially in open-domain conversations. Latent variable modeling avoids this need for the labels. In this paper, we propose an efficient algorithm for this latent variable modeling that is able to leverage a large amount of dialogue data. Rather than directly training the complex retriever, we adapt a query generator with an off-the-shelf retriever, and the query generator and response generator are simultaneously trained over the latent variable of query. Moreover, we employ lower bound of the evidence as a training objective and modify it to robustly perform the joint training. Experimental results on diverse knowledge-grounded dialogue datasets show that the proposed algorithm significantly outperforms the supervised learning algorithm even without the use of the annotated knowledge while maintaining efficiency and scalability. | [
"Han, Gunsoo",
"Jo, Daejin",
"Nam, Daniel",
"Yoon, Eunseop",
"Kwon, Taehwan",
"Rho, Seungeun",
"On, Kyoung-Woon",
"Yoo, Chang",
"Kim, Sungwoong"
] | Efficient Latent Variable Modeling for Knowledge-Grounded Dialogue Generation | findings-emnlp.177 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.178.bib | https://aclanthology.org/2023.findings-emnlp.178/ | @inproceedings{liu-etal-2023-ask,
title = "Ask To The Point: Open-Domain Entity-Centric Question Generation",
author = "Liu, Yuxiang and
Huang, Jie and
Chang, Kevin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.178",
doi = "10.18653/v1/2023.findings-emnlp.178",
pages = "2703--2716",
abstract = "We introduce a new task called *entity-centric question generation* (ECQG), motivated by real-world applications such as topic-specific learning, assisted reading, and fact-checking. The task aims to generate questions from an entity perspective. To solve ECQG, we propose a coherent PLM-based framework GenCONE with two novel modules: content focusing and question verification. The content focusing module first identifies a focus as {``}what to ask{''} to form draft questions, and the question verification module refines the questions afterwards by verifying the answerability. We also construct a large-scale open-domain dataset from SQuAD to support this task. Our extensive experiments demonstrate that GenCONE significantly and consistently outperforms various baselines, and two modules are effective and complementary in generating high-quality questions.",
}
| We introduce a new task called *entity-centric question generation* (ECQG), motivated by real-world applications such as topic-specific learning, assisted reading, and fact-checking. The task aims to generate questions from an entity perspective. To solve ECQG, we propose a coherent PLM-based framework GenCONE with two novel modules: content focusing and question verification. The content focusing module first identifies a focus as {``}what to ask{''} to form draft questions, and the question verification module refines the questions afterwards by verifying the answerability. We also construct a large-scale open-domain dataset from SQuAD to support this task. Our extensive experiments demonstrate that GenCONE significantly and consistently outperforms various baselines, and two modules are effective and complementary in generating high-quality questions. | [
"Liu, Yuxiang",
"Huang, Jie",
"Chang, Kevin"
] | Ask To The Point: Open-Domain Entity-Centric Question Generation | findings-emnlp.178 | 2310.14126 | [
"https://github.com/liuyuxiang512/ecqg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.179.bib | https://aclanthology.org/2023.findings-emnlp.179/ | @inproceedings{wang-etal-2023-self-prompted,
title = "Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning",
author = "Wang, Jinyuan and
Li, Junlong and
Zhao, Hai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.179",
doi = "10.18653/v1/2023.findings-emnlp.179",
pages = "2717--2731",
abstract = "In open-domain question-answering (ODQA), most existing questions require single-hop reasoning on commonsense. To further extend this task, we officially introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop questions with explicit reasoning steps in open-domain setting. Recently, large language models (LLMs) have found significant utility in facilitating ODQA without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs to a greater extent with manual or automated paradigms. However, existing automated methods lack of quality assurance, while manual approaches suffer from limited scalability and poor diversity, hindering the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT selection and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that our proposed SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps by recalling {\textasciitilde}50{\%} of intermediate answers on MuSiQue-Ans dataset.",
}
| In open-domain question-answering (ODQA), most existing questions require single-hop reasoning on commonsense. To further extend this task, we officially introduce open-domain multi-hop reasoning (ODMR) by answering multi-hop questions with explicit reasoning steps in open-domain setting. Recently, large language models (LLMs) have found significant utility in facilitating ODQA without external corpus. Furthermore, chain-of-thought (CoT) prompting boosts the reasoning capability of LLMs to a greater extent with manual or automated paradigms. However, existing automated methods lack of quality assurance, while manual approaches suffer from limited scalability and poor diversity, hindering the capabilities of LLMs. In this paper, we propose Self-prompted Chain-of-Thought (SP-CoT), an automated framework to mass-produce high quality CoTs of LLMs, by LLMs and for LLMs. SP-CoT introduces an automated generation pipeline of high quality ODMR datasets, an adaptive sampler for in-context CoT selection and self-prompted inference via in-context learning. Extensive experiments on four multi-hop question-answering benchmarks show that our proposed SP-CoT not only significantly surpasses the previous SOTA methods on large-scale (175B) LLMs, but also nearly doubles the zero-shot performance of small-scale (13B) LLMs. Further analysis reveals the remarkable capability of SP-CoT to elicit direct and concise intermediate reasoning steps by recalling {\textasciitilde}50{\%} of intermediate answers on MuSiQue-Ans dataset. | [
"Wang, Jinyuan",
"Li, Junlong",
"Zhao, Hai"
] | Self-prompted Chain-of-Thought on Large Language Models for Open-domain Multi-hop Reasoning | findings-emnlp.179 | 2310.13552 | [
"https://github.com/noewangjy/sp-cot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.180.bib | https://aclanthology.org/2023.findings-emnlp.180/ | @inproceedings{chen-etal-2023-case,
title = "{CASE}: Commonsense-Augmented Score with an Expanded Answer Space",
author = "Chen, Wenkai and
Ravi, Sahithya and
Shwartz, Vered",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.180",
doi = "10.18653/v1/2023.findings-emnlp.180",
pages = "2732--2744",
abstract = "LLMs have demonstrated impressive zero-shot performance on NLP tasks thanks to the knowledge they acquired in their training. In multiple-choice QA tasks, the LM probabilities are used as an imperfect measure of the plausibility of each answer choice. One of the major limitations of the basic score is that it treats all words as equally important. We propose CASE, a Commonsense-Augmented Score with an Expanded Answer Space. CASE addresses this limitation by assigning importance weights for individual words based on their semantic relations to other words in the input. The dynamic weighting approach outperforms basic LM scores, not only because it reduces noise from unimportant words, but also because it informs the model of implicit commonsense knowledge that may be useful for answering the question. We then also follow prior work in expanding the answer space by generating lexically-divergent answers that are conceptually-similar to the choices. When combined with answer space expansion, our method outperforms strong baselines on 5 commonsense benchmarks. We further show these two approaches are complementary and may be especially beneficial when using smaller LMs.",
}
| LLMs have demonstrated impressive zero-shot performance on NLP tasks thanks to the knowledge they acquired in their training. In multiple-choice QA tasks, the LM probabilities are used as an imperfect measure of the plausibility of each answer choice. One of the major limitations of the basic score is that it treats all words as equally important. We propose CASE, a Commonsense-Augmented Score with an Expanded Answer Space. CASE addresses this limitation by assigning importance weights for individual words based on their semantic relations to other words in the input. The dynamic weighting approach outperforms basic LM scores, not only because it reduces noise from unimportant words, but also because it informs the model of implicit commonsense knowledge that may be useful for answering the question. We then also follow prior work in expanding the answer space by generating lexically-divergent answers that are conceptually-similar to the choices. When combined with answer space expansion, our method outperforms strong baselines on 5 commonsense benchmarks. We further show these two approaches are complementary and may be especially beneficial when using smaller LMs. | [
"Chen, Wenkai",
"Ravi, Sahithya",
"Shwartz, Vered"
] | CASE: Commonsense-Augmented Score with an Expanded Answer Space | findings-emnlp.180 | 2311.01684 | [
"https://github.com/wk-chen/commonsense-augmented-score-with-an-expanded-answer-space"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.181.bib | https://aclanthology.org/2023.findings-emnlp.181/ | @inproceedings{li-etal-2023-grenade,
title = "{GRENADE}: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs",
author = "Li, Yichuan and
Ding, Kaize and
Lee, Kyumin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.181",
doi = "10.18653/v1/2023.findings-emnlp.181",
pages = "2745--2757",
abstract = "Self-supervised representation learning on text-attributed graphs, which aims to create expressive and generalizable representations for various downstream tasks, has received increasing research attention lately. However, existing methods either struggle to capture the full extent of structural context information or rely on task-specific training labels, which largely hampers their effectiveness and generalizability in practice. To solve the problem of self-supervised representation learning on text-attributed graphs, we develop a novel Graph-Centric Language model {--} GRENADE. Specifically, GRENADE harnesses the synergy of both pre-trained language model and graph neural network by optimizing with two specialized self-supervised learning algorithms: graph-centric contrastive learning and graph-centric knowledge alignment. The proposed graph-centric self-supervised learning algorithms effectively help GRENADE to capture informative textual semantics as well as structural context information on text-attributed graphs. Through extensive experiments, GRENADE shows its superiority over state-of-the-art methods.",
}
| Self-supervised representation learning on text-attributed graphs, which aims to create expressive and generalizable representations for various downstream tasks, has received increasing research attention lately. However, existing methods either struggle to capture the full extent of structural context information or rely on task-specific training labels, which largely hampers their effectiveness and generalizability in practice. To solve the problem of self-supervised representation learning on text-attributed graphs, we develop a novel Graph-Centric Language model {--} GRENADE. Specifically, GRENADE harnesses the synergy of both pre-trained language model and graph neural network by optimizing with two specialized self-supervised learning algorithms: graph-centric contrastive learning and graph-centric knowledge alignment. The proposed graph-centric self-supervised learning algorithms effectively help GRENADE to capture informative textual semantics as well as structural context information on text-attributed graphs. Through extensive experiments, GRENADE shows its superiority over state-of-the-art methods. | [
"Li, Yichuan",
"Ding, Kaize",
"Lee, Kyumin"
] | GRENADE: Graph-Centric Language Model for Self-Supervised Representation Learning on Text-Attributed Graphs | findings-emnlp.181 | 2310.15109 | [
"https://github.com/bigheiniu/grenade"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.182.bib | https://aclanthology.org/2023.findings-emnlp.182/ | @inproceedings{mckenna-etal-2023-sources,
title = "Sources of Hallucination by Large Language Models on Inference Tasks",
author = "McKenna, Nick and
Li, Tianyi and
Cheng, Liang and
Hosseini, Mohammad and
Johnson, Mark and
Steedman, Mark",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.182",
doi = "10.18653/v1/2023.findings-emnlp.182",
pages = "2758--2774",
abstract = "Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI), necessary for applied tasks like question answering and summarization. We present a series of behavioral studies on several LLM families (LLaMA, GPT-3.5, and PaLM) which probe their behavior using controlled experiments. We establish two biases originating from pretraining which predict much of their behavior, and show that these are major sources of hallucination in generative LLMs. First, memorization at the level of sentences: we show that, regardless of the premise, models falsely label NLI test samples as entailing when the hypothesis is attested in training data, and that entities are used as {``}indices{'} to access the memorized data. Second, statistical patterns of usage learned at the level of corpora: we further show a similar effect when the premise predicate is less frequent than that of the hypothesis in the training data, a bias following from previous studies. We demonstrate that LLMs perform significantly worse on NLI test samples which do not conform to these biases than those which do, and we offer these as valuable controls for future LLM evaluation.",
}
| Large Language Models (LLMs) are claimed to be capable of Natural Language Inference (NLI), necessary for applied tasks like question answering and summarization. We present a series of behavioral studies on several LLM families (LLaMA, GPT-3.5, and PaLM) which probe their behavior using controlled experiments. We establish two biases originating from pretraining which predict much of their behavior, and show that these are major sources of hallucination in generative LLMs. First, memorization at the level of sentences: we show that, regardless of the premise, models falsely label NLI test samples as entailing when the hypothesis is attested in training data, and that entities are used as {``}indices{'} to access the memorized data. Second, statistical patterns of usage learned at the level of corpora: we further show a similar effect when the premise predicate is less frequent than that of the hypothesis in the training data, a bias following from previous studies. We demonstrate that LLMs perform significantly worse on NLI test samples which do not conform to these biases than those which do, and we offer these as valuable controls for future LLM evaluation. | [
"McKenna, Nick",
"Li, Tianyi",
"Cheng, Liang",
"Hosseini, Mohammad",
"Johnson, Mark",
"Steedman, Mark"
] | Sources of Hallucination by Large Language Models on Inference Tasks | findings-emnlp.182 | 2305.14552 | [
"https://github.com/teddy-li/llm-nli-analysis"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.183.bib | https://aclanthology.org/2023.findings-emnlp.183/ | @inproceedings{zhang-etal-2023-efficient-long,
title = "Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer",
author = "Zhang, Qingru and
Ram, Dhananjay and
Hawkins, Cole and
Zha, Sheng and
Zhao, Tuo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.183",
doi = "10.18653/v1/2023.findings-emnlp.183",
pages = "2775--2786",
abstract = "Pretrained transformer models have demonstrated remarkable performance across various natural language processing tasks. These models leverage the attention mechanism to capture long- and short-range dependencies in the sequence. However, the (full) attention mechanism incurs high computational cost {--} quadratic in the sequence length, which is not affordable in tasks with long sequences, e.g., inputs with 8k tokens. Although sparse attention can be used to improve computational efficiency, as suggested in existing work, it has limited modeling capacity and often fails to capture complicated dependencies in long sequences. To tackle this challenge, we propose MASFormer, an easy-to-implement transformer variant with mixed attention spans. Specifically, MASFormer is equipped with full attention to capture long-range dependencies, but only at a small number of layers. For the remaining layers, MASformer only employs sparse attention to capture short-range dependencies. Our experiments on natural language modeling and generation tasks show that a decoder-only MASFormer model of 1.3B parameters can achieve competitive performance to vanilla transformers with full attention while significantly reducing computational cost (up to 75{\%}). Additionally, we investigate the effectiveness of continual training with long sequence data and how sequence length impacts downstream generation performance, which may be of independent interest.",
}
| Pretrained transformer models have demonstrated remarkable performance across various natural language processing tasks. These models leverage the attention mechanism to capture long- and short-range dependencies in the sequence. However, the (full) attention mechanism incurs high computational cost {--} quadratic in the sequence length, which is not affordable in tasks with long sequences, e.g., inputs with 8k tokens. Although sparse attention can be used to improve computational efficiency, as suggested in existing work, it has limited modeling capacity and often fails to capture complicated dependencies in long sequences. To tackle this challenge, we propose MASFormer, an easy-to-implement transformer variant with mixed attention spans. Specifically, MASFormer is equipped with full attention to capture long-range dependencies, but only at a small number of layers. For the remaining layers, MASformer only employs sparse attention to capture short-range dependencies. Our experiments on natural language modeling and generation tasks show that a decoder-only MASFormer model of 1.3B parameters can achieve competitive performance to vanilla transformers with full attention while significantly reducing computational cost (up to 75{\%}). Additionally, we investigate the effectiveness of continual training with long sequence data and how sequence length impacts downstream generation performance, which may be of independent interest. | [
"Zhang, Qingru",
"Ram, Dhananjay",
"Hawkins, Cole",
"Zha, Sheng",
"Zhao, Tuo"
] | Efficient Long-Range Transformers: You Need to Attend More, but Not Necessarily at Every Layer | findings-emnlp.183 | 2310.12442 | [
""
] | https://huggingface.co/papers/2310.12442 | 0 | 1 | 1 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.184.bib | https://aclanthology.org/2023.findings-emnlp.184/ | @inproceedings{li-etal-2023-prompting,
title = "Prompting {C}hat{GPT} in {MNER}: Enhanced Multimodal Named Entity Recognition with Auxiliary Refined Knowledge",
author = "Li, Jinyuan and
Li, Han and
Pan, Zhuo and
Sun, Di and
Wang, Jiahao and
Zhang, Wenkun and
Pan, Gang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.184",
doi = "10.18653/v1/2023.findings-emnlp.184",
pages = "2787--2802",
abstract = "Multimodal Named Entity Recognition (MNER) on social media aims to enhance textual entity prediction by incorporating image-based clues. Existing studies mainly focus on maximizing the utilization of pertinent image information or incorporating external knowledge from explicit knowledge bases. However, these methods either neglect the necessity of providing the model with external knowledge, or encounter issues of high redundancy in the retrieved knowledge. In this paper, we present PGIM {---} a two-stage framework that aims to leverage ChatGPT as an implicit knowledge base and enable it to heuristically generate auxiliary knowledge for more efficient entity prediction. Specifically, PGIM contains a Multimodal Similar Example Awareness module that selects suitable examples from a small number of predefined artificial samples. These examples are then integrated into a formatted prompt template tailored to the MNER and guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired knowledge is integrated with the original text and fed into a downstream model for further processing. Extensive experiments show that PGIM outperforms state-of-the-art methods on two classic MNER datasets and exhibits a stronger robustness and generalization capability.",
}
| Multimodal Named Entity Recognition (MNER) on social media aims to enhance textual entity prediction by incorporating image-based clues. Existing studies mainly focus on maximizing the utilization of pertinent image information or incorporating external knowledge from explicit knowledge bases. However, these methods either neglect the necessity of providing the model with external knowledge, or encounter issues of high redundancy in the retrieved knowledge. In this paper, we present PGIM {---} a two-stage framework that aims to leverage ChatGPT as an implicit knowledge base and enable it to heuristically generate auxiliary knowledge for more efficient entity prediction. Specifically, PGIM contains a Multimodal Similar Example Awareness module that selects suitable examples from a small number of predefined artificial samples. These examples are then integrated into a formatted prompt template tailored to the MNER and guide ChatGPT to generate auxiliary refined knowledge. Finally, the acquired knowledge is integrated with the original text and fed into a downstream model for further processing. Extensive experiments show that PGIM outperforms state-of-the-art methods on two classic MNER datasets and exhibits a stronger robustness and generalization capability. | [
"Li, Jinyuan",
"Li, Han",
"Pan, Zhuo",
"Sun, Di",
"Wang, Jiahao",
"Zhang, Wenkun",
"Pan, Gang"
] | Prompting ChatGPT in MNER: Enhanced Multimodal Named Entity Recognition with Auxiliary Refined Knowledge | findings-emnlp.184 | 2305.12212 | [
"https://github.com/jinyuanli0012/pgim"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.185.bib | https://aclanthology.org/2023.findings-emnlp.185/ | @inproceedings{gur-etal-2023-understanding,
title = "Understanding {HTML} with Large Language Models",
author = "Gur, Izzeddin and
Nachum, Ofir and
Miao, Yingjie and
Safdari, Mustafa and
Huang, Austin and
Chowdhery, Aakanksha and
Narang, Sharan and
Fiedel, Noah and
Faust, Aleksandra",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.185",
doi = "10.18653/v1/2023.findings-emnlp.185",
pages = "2803--2821",
abstract = "Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding {--} i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval {--} have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50{\%} more tasks using 192x less data compared to the previous best supervised model. We create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl",
}
| Large language models (LLMs) have shown exceptional performance on a variety of natural language tasks. Yet, their capabilities for HTML understanding {--} i.e., parsing the raw HTML of a webpage, with applications to automation of web-based tasks, crawling, and browser-assisted retrieval {--} have not been fully explored. We contribute HTML understanding models (fine-tuned LLMs) and an in-depth analysis of their capabilities under three tasks: (i) Semantic Classification of HTML elements, (ii) Description Generation for HTML inputs, and (iii) Autonomous Web Navigation of HTML pages. While previous work has developed dedicated architectures and training procedures for HTML understanding, we show that LLMs pretrained on standard natural language corpora transfer remarkably well to HTML understanding tasks. For instance, when fine-tuned on data from the MiniWoB benchmark, LLMs successfully complete 50{\%} more tasks using 192x less data compared to the previous best supervised model. We create and open-source a large-scale HTML dataset distilled and auto-labeled from CommonCrawl | [
"Gur, Izzeddin",
"Nachum, Ofir",
"Miao, Yingjie",
"Safdari, Mustafa",
"Huang, Austin",
"Chowdhery, Aakanksha",
"Narang, Sharan",
"Fiedel, Noah",
"Faust, Aleks",
"ra"
] | Understanding HTML with Large Language Models | findings-emnlp.185 | 2210.03945 | [
""
] | https://huggingface.co/papers/2210.03945 | 0 | 1 | 0 | 9 | [] | [
"EricWiener/llm4html-descgen"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.186.bib | https://aclanthology.org/2023.findings-emnlp.186/ | @inproceedings{yeo-jaidka-2023-peace,
title = "The {PEACE}-Reviews dataset: Modeling Cognitive Appraisals in Emotion Text Analysis",
author = "Yeo, Gerard and
Jaidka, Kokil",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.186",
doi = "10.18653/v1/2023.findings-emnlp.186",
pages = "2822--2840",
abstract = "Cognitive appraisal plays a pivotal role in deciphering emotions. Recent studies have delved into its significance, yet the interplay between various forms of cognitive appraisal and specific emotions, such as joy and anger, remains an area of exploration in consumption contexts. Our research introduces the PEACE-Reviews dataset, a unique compilation of annotated autobiographical accounts where individuals detail their emotional and appraisal experiences during interactions with personally significant products or services. Focusing on the inherent variability in consumer experiences, this dataset offers an in-depth analysis of participants{'} psychological traits, their evaluative feedback on purchases, and the resultant emotions. Notably, the PEACE-Reviews dataset encompasses emotion, cognition, individual traits, and demographic data. We also introduce preliminary models that predict certain features based on the autobiographical narratives.",
}
| Cognitive appraisal plays a pivotal role in deciphering emotions. Recent studies have delved into its significance, yet the interplay between various forms of cognitive appraisal and specific emotions, such as joy and anger, remains an area of exploration in consumption contexts. Our research introduces the PEACE-Reviews dataset, a unique compilation of annotated autobiographical accounts where individuals detail their emotional and appraisal experiences during interactions with personally significant products or services. Focusing on the inherent variability in consumer experiences, this dataset offers an in-depth analysis of participants{'} psychological traits, their evaluative feedback on purchases, and the resultant emotions. Notably, the PEACE-Reviews dataset encompasses emotion, cognition, individual traits, and demographic data. We also introduce preliminary models that predict certain features based on the autobiographical narratives. | [
"Yeo, Gerard",
"Jaidka, Kokil"
] | The PEACE-Reviews dataset: Modeling Cognitive Appraisals in Emotion Text Analysis | findings-emnlp.186 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.187.bib | https://aclanthology.org/2023.findings-emnlp.187/ | @inproceedings{ye-etal-2023-ureader,
title = "{UR}eader: Universal {OCR}-free Visually-situated Language Understanding with Multimodal Large Language Model",
author = "Ye, Jiabo and
Hu, Anwen and
Xu, Haiyang and
Ye, Qinghao and
Yan, Ming and
Xu, Guohai and
Li, Chenliang and
Tian, Junfeng and
Qian, Qi and
Zhang, Ji and
Jin, Qin and
He, Liang and
Lin, Xin and
Huang, Fei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.187",
doi = "10.18653/v1/2023.findings-emnlp.187",
pages = "2841--2858",
abstract = "Text is ubiquitous in our visual world, conveying crucial information, such as in documents, websites, and everyday photographs. In this work, we propose UReader, a first exploration of universal OCR-free visually-situated language understanding based on the Multimodal Large Language Model (MLLM). By leveraging the shallow text recognition ability of the MLLM, we only finetuned 1.2{\%} parameters and the training cost is much lower than previous work following domain-specific pretraining and finetuning paradigms. Concretely, UReader is jointly finetuned on a wide range of Visually-situated Language Understanding tasks via a unified instruction format. To enhance the visual text and semantic understanding, we further apply two auxiliary tasks with the same format, namely text reading and key points generation tasks. We design a shape-adaptive cropping module before the encoder-decoder architecture of MLLM to leverage the frozen low-resolution vision encoder for processing high-resolution images. Without downstream finetuning, our single model achieves state-of-the-art ocr-free performance in 8 out of 10 visually-situated language understanding tasks, across 5 domains: documents, tables, charts, natural images, and webpage screenshots. Codes and instruction-tuning datasets will be released.",
}
| Text is ubiquitous in our visual world, conveying crucial information, such as in documents, websites, and everyday photographs. In this work, we propose UReader, a first exploration of universal OCR-free visually-situated language understanding based on the Multimodal Large Language Model (MLLM). By leveraging the shallow text recognition ability of the MLLM, we only finetuned 1.2{\%} parameters and the training cost is much lower than previous work following domain-specific pretraining and finetuning paradigms. Concretely, UReader is jointly finetuned on a wide range of Visually-situated Language Understanding tasks via a unified instruction format. To enhance the visual text and semantic understanding, we further apply two auxiliary tasks with the same format, namely text reading and key points generation tasks. We design a shape-adaptive cropping module before the encoder-decoder architecture of MLLM to leverage the frozen low-resolution vision encoder for processing high-resolution images. Without downstream finetuning, our single model achieves state-of-the-art ocr-free performance in 8 out of 10 visually-situated language understanding tasks, across 5 domains: documents, tables, charts, natural images, and webpage screenshots. Codes and instruction-tuning datasets will be released. | [
"Ye, Jiabo",
"Hu, Anwen",
"Xu, Haiyang",
"Ye, Qinghao",
"Yan, Ming",
"Xu, Guohai",
"Li, Chenliang",
"Tian, Junfeng",
"Qian, Qi",
"Zhang, Ji",
"Jin, Qin",
"He, Liang",
"Lin, Xin",
"Huang, Fei"
] | UReader: Universal OCR-free Visually-situated Language Understanding with Multimodal Large Language Model | findings-emnlp.187 | 2310.05126 | [
"https://github.com/lukeforeveryoung/ureader"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.188.bib | https://aclanthology.org/2023.findings-emnlp.188/ | @inproceedings{shen-etal-2023-loose,
title = "Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback",
author = "Shen, Wei and
Zheng, Rui and
Zhan, Wenyu and
Zhao, Jun and
Dou, Shihan and
Gui, Tao and
Zhang, Qi and
Huang, Xuanjing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.188",
doi = "10.18653/v1/2023.findings-emnlp.188",
pages = "2859--2873",
abstract = "Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values. This alignment requires a vast corpus of human feedback to learn a reward model, which is subsequently used to finetune language models. However, we have identified that the reward model often finds shortcuts to bypass its intended objectives, misleadingly assuming that humans prefer longer responses. The emergence of length bias often induces the model to favor longer outputs, yet it doesn{'}t equate to an increase in helpful information within these outputs. In this paper, we propose an innovative solution, applying the Product-of-Experts (PoE) technique to separate reward modeling from the influence of sequence length. In our framework, the main expert concentrates on understanding human intents, while the biased expert targets the identification and capture of length bias. To further enhance the learning of bias, we introduce perturbations into the bias-focused expert, disrupting the flow of semantic information. Experimental results validate the effectiveness of our approach, indicating that language model performance is improved, irrespective of sequence length.",
}
| Reinforcement learning from human feedback serves as a crucial bridge, aligning large language models with human and societal values. This alignment requires a vast corpus of human feedback to learn a reward model, which is subsequently used to finetune language models. However, we have identified that the reward model often finds shortcuts to bypass its intended objectives, misleadingly assuming that humans prefer longer responses. The emergence of length bias often induces the model to favor longer outputs, yet it doesn{'}t equate to an increase in helpful information within these outputs. In this paper, we propose an innovative solution, applying the Product-of-Experts (PoE) technique to separate reward modeling from the influence of sequence length. In our framework, the main expert concentrates on understanding human intents, while the biased expert targets the identification and capture of length bias. To further enhance the learning of bias, we introduce perturbations into the bias-focused expert, disrupting the flow of semantic information. Experimental results validate the effectiveness of our approach, indicating that language model performance is improved, irrespective of sequence length. | [
"Shen, Wei",
"Zheng, Rui",
"Zhan, Wenyu",
"Zhao, Jun",
"Dou, Shihan",
"Gui, Tao",
"Zhang, Qi",
"Huang, Xuanjing"
] | Loose lips sink ships: Mitigating Length Bias in Reinforcement Learning from Human Feedback | findings-emnlp.188 | 2310.05199 | [
""
] | https://huggingface.co/papers/2310.05199 | 1 | 1 | 0 | 8 | [] | [] | [] | 1 | Poster |
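
To make the length-bias record above more concrete, here is a minimal, hypothetical sketch of a Product-of-Experts reward head in the spirit of that abstract (not the authors' released code; the noise level, head shapes, and loss below are illustrative assumptions): the main expert scores preference, the bias expert only sees a perturbed input, their rewards are summed during training, and only the main expert is kept afterwards.

```python
# Hypothetical PoE reward-head sketch; adding scalar rewards corresponds to
# multiplying the two experts' pairwise (Bradley-Terry) preference probabilities.
import torch
import torch.nn as nn

class PoERewardHead(nn.Module):
    def __init__(self, hidden_size: int, noise_std: float = 0.1):
        super().__init__()
        self.main_expert = nn.Linear(hidden_size, 1)  # models human preference
        self.bias_expert = nn.Linear(hidden_size, 1)  # meant to absorb length bias
        self.noise_std = noise_std

    def forward(self, pooled: torch.Tensor) -> torch.Tensor:
        main = self.main_expert(pooled).squeeze(-1)
        if not self.training:
            return main  # drop the bias expert once training is done
        # Perturb the bias expert's input so semantic information is disrupted
        # and it can mostly latch onto shallow cues (illustrative assumption).
        noisy = pooled + self.noise_std * torch.randn_like(pooled)
        bias = self.bias_expert(noisy).squeeze(-1)
        return main + bias  # PoE: rewards add, preference experts multiply

def pairwise_loss(r_chosen: torch.Tensor, r_rejected: torch.Tensor) -> torch.Tensor:
    # Standard reward-model loss on the combined reward.
    return -torch.nn.functional.logsigmoid(r_chosen - r_rejected).mean()

if __name__ == "__main__":
    head = PoERewardHead(hidden_size=16)
    chosen, rejected = torch.randn(4, 16), torch.randn(4, 16)
    loss = pairwise_loss(head(chosen), head(rejected))
    loss.backward()
    print(float(loss))
```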
https://aclanthology.org/2023.findings-emnlp.189.bib | https://aclanthology.org/2023.findings-emnlp.189/ | @inproceedings{wang-etal-2023-filling,
title = "Filling the Image Information Gap for {VQA}: Prompting Large Language Models to Proactively Ask Questions",
author = "Wang, Ziyue and
Chen, Chi and
Li, Peng and
Liu, Yang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.189",
doi = "10.18653/v1/2023.findings-emnlp.189",
pages = "2874--2890",
abstract = "Large Language Models (LLMs) demonstrate impressive reasoning ability and the maintenance of world knowledge not only in natural language tasks, but also in some vision-language tasks such as open-domain knowledge-based visual question answering (OK-VQA). As images are invisible to LLMs, researchers convert images to text to engage LLMs into the visual question reasoning procedure. This leads to discrepancies between images and their textual representations presented to LLMs, which consequently impedes final reasoning performance. To fill the information gap and better leverage the reasoning capability, we design a framework that enables LLMs to proactively ask relevant questions to unveil more details in the image, along with filters for refining the generated information. We validate our idea on OK-VQA and A-OKVQA. Our method continuously boosts the performance of baselines methods by an average gain of 2.15{\%} on OK-VQA, and achieves consistent improvements across different LLMs.",
}
| Large Language Models (LLMs) demonstrate impressive reasoning ability and the maintenance of world knowledge not only in natural language tasks, but also in some vision-language tasks such as open-domain knowledge-based visual question answering (OK-VQA). As images are invisible to LLMs, researchers convert images to text to engage LLMs in the visual question reasoning procedure. This leads to discrepancies between images and their textual representations presented to LLMs, which consequently impedes final reasoning performance. To fill the information gap and better leverage the reasoning capability, we design a framework that enables LLMs to proactively ask relevant questions to unveil more details in the image, along with filters for refining the generated information. We validate our idea on OK-VQA and A-OKVQA. Our method continuously boosts the performance of baseline methods by an average gain of 2.15{\%} on OK-VQA, and achieves consistent improvements across different LLMs. | [
"Wang, Ziyue",
"Chen, Chi",
"Li, Peng",
"Liu, Yang"
] | Filling the Image Information Gap for VQA: Prompting Large Language Models to Proactively Ask Questions | findings-emnlp.189 | 2311.11598 | [
"https://github.com/thunlp-mt/fiig"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.190.bib | https://aclanthology.org/2023.findings-emnlp.190/ | @inproceedings{lu-etal-2023-take,
title = "Take a Closer Look at Multilinguality! Improve Multilingual Pre-Training Using Monolingual Corpora Only",
author = "Lu, Jinliang and
Lu, Yu and
Zhang, Jiajun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.190",
doi = "10.18653/v1/2023.findings-emnlp.190",
pages = "2891--2907",
abstract = "Recent studies have revealed the remarkable cross-lingual capability of multilingual pre-trained language models (mPLMs), even when pre-trained without parallel corpora (mono-mPLMs). Intuitively, semantic alignments may be the reason behind such capability but remain under-explored. In this work, we investigate the alignment properties from the token perspective in mono-mPLMs and find that the alignments correspond to the geometric similarity of embedding space across different languages. Nevertheless, mono-mPLMs tend to damage this geometric similarity at the higher layers due to the lack of cross-lingual interactions, thus limiting their cross-lingual transfer capabilities. To address this issue, we introduce token-level and semantic-level code-switched masked language modeling, employing the self-induced token alignments to explicitly improve cross-lingual interactions over layers of mono-mPLMs without relying on parallel sentences. We evaluate our method on various natural language understanding tasks and unsupervised machine translation tasks. The results demonstrate that our methods outperform the strong baselines and achieve comparable performance with mPLMs trained with parallel corpora.",
}
| Recent studies have revealed the remarkable cross-lingual capability of multilingual pre-trained language models (mPLMs), even when pre-trained without parallel corpora (mono-mPLMs). Intuitively, semantic alignments may be the reason behind such capability but remain under-explored. In this work, we investigate the alignment properties from the token perspective in mono-mPLMs and find that the alignments correspond to the geometric similarity of embedding space across different languages. Nevertheless, mono-mPLMs tend to damage this geometric similarity at the higher layers due to the lack of cross-lingual interactions, thus limiting their cross-lingual transfer capabilities. To address this issue, we introduce token-level and semantic-level code-switched masked language modeling, employing the self-induced token alignments to explicitly improve cross-lingual interactions over layers of mono-mPLMs without relying on parallel sentences. We evaluate our method on various natural language understanding tasks and unsupervised machine translation tasks. The results demonstrate that our methods outperform the strong baselines and achieve comparable performance with mPLMs trained with parallel corpora. | [
"Lu, Jinliang",
"Lu, Yu",
"Zhang, Jiajun"
] | Take a Closer Look at Multilinguality! Improve Multilingual Pre-Training Using Monolingual Corpora Only | findings-emnlp.190 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.191.bib | https://aclanthology.org/2023.findings-emnlp.191/ | @inproceedings{liu-etal-2023-logicot,
title = "{L}ogi{C}o{T}: Logical Chain-of-Thought Instruction Tuning",
author = "Liu, Hanmeng and
Teng, Zhiyang and
Cui, Leyang and
Zhang, Chaoli and
Zhou, Qiji and
Zhang, Yue",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.191",
doi = "10.18653/v1/2023.findings-emnlp.191",
pages = "2908--2921",
abstract = "Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive chain-of-thought reasoning ability. Recent work on self-instruction tuning, such as Alpaca, has focused on enhancing the general proficiency of models. These instructions enable the model to achieve performance comparable to GPT-3.5 on general tasks like open-domain text generation and paraphrasing. However, they fall short of helping the model handle complex reasoning tasks. To bridge the gap, this paper presents LogiCoT, a new instruction-tuning dataset for Logical Chain-of-Thought reasoning with GPT-4. We elaborate on the process of harvesting instructions for prompting GPT-4 to generate chain-of-thought rationales. LogiCoT serves as an instruction set for teaching models of logical reasoning and elicits general reasoning skills.",
}
| Generative Pre-trained Transformer 4 (GPT-4) demonstrates impressive chain-of-thought reasoning ability. Recent work on self-instruction tuning, such as Alpaca, has focused on enhancing the general proficiency of models. These instructions enable the model to achieve performance comparable to GPT-3.5 on general tasks like open-domain text generation and paraphrasing. However, they fall short of helping the model handle complex reasoning tasks. To bridge the gap, this paper presents LogiCoT, a new instruction-tuning dataset for Logical Chain-of-Thought reasoning with GPT-4. We elaborate on the process of harvesting instructions for prompting GPT-4 to generate chain-of-thought rationales. LogiCoT serves as an instruction set for teaching models of logical reasoning and elicits general reasoning skills. | [
"Liu, Hanmeng",
"Teng, Zhiyang",
"Cui, Leyang",
"Zhang, Chaoli",
"Zhou, Qiji",
"Zhang, Yue"
] | LogiCoT: Logical Chain-of-Thought Instruction Tuning | findings-emnlp.191 | [
"https://github.com/csitfun/logicot"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.192.bib | https://aclanthology.org/2023.findings-emnlp.192/ | @inproceedings{cooper-etal-2023-hiding,
title = "Hiding in Plain Sight: Tweets with Hate Speech Masked by Homoglyphs",
author = "Cooper, Portia and
Surdeanu, Mihai and
Blanco, Eduardo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.192",
doi = "10.18653/v1/2023.findings-emnlp.192",
pages = "2922--2929",
abstract = "To avoid detection by current NLP monitoring applications, progenitors of hate speech often replace one or more letters in offensive words with homoglyphs, visually similar Unicode characters. Harvesting real-world hate speech containing homoglyphs is challenging due to the vast replacement possibilities. We developed a character substitution scraping method and assembled the Offensive Tweets with Homoglyphs (OTH) Dataset (N=90,788) with more than 1.5 million occurrences of 1,281 non-Latin characters (emojis excluded). In an annotated sample (n=700), 40.14{\%} of the tweets were found to contain hate speech. We assessed the performance of seven transformer-based hate speech detection models and found that they performed poorly in a zero-shot setting (F1 scores between 0.04 and 0.52) but normalizing the data dramatically improved detection (F1 scores between 0.59 and 0.71). Training the models using the annotated data further boosted performance (highest micro-averaged F1 score=0.88, using five-fold cross validation). This study indicates that a dataset containing homoglyphs known and unknown to the scraping script can be collected, and that neural models can be trained to recognize camouflaged real-world hate speech.",
}
| To avoid detection by current NLP monitoring applications, progenitors of hate speech often replace one or more letters in offensive words with homoglyphs, visually similar Unicode characters. Harvesting real-world hate speech containing homoglyphs is challenging due to the vast replacement possibilities. We developed a character substitution scraping method and assembled the Offensive Tweets with Homoglyphs (OTH) Dataset (N=90,788) with more than 1.5 million occurrences of 1,281 non-Latin characters (emojis excluded). In an annotated sample (n=700), 40.14{\%} of the tweets were found to contain hate speech. We assessed the performance of seven transformer-based hate speech detection models and found that they performed poorly in a zero-shot setting (F1 scores between 0.04 and 0.52) but normalizing the data dramatically improved detection (F1 scores between 0.59 and 0.71). Training the models using the annotated data further boosted performance (highest micro-averaged F1 score=0.88, using five-fold cross validation). This study indicates that a dataset containing homoglyphs known and unknown to the scraping script can be collected, and that neural models can be trained to recognize camouflaged real-world hate speech. | [
"Cooper, Portia",
"Surdeanu, Mihai",
"Blanco, Eduardo"
] | Hiding in Plain Sight: Tweets with Hate Speech Masked by Homoglyphs | findings-emnlp.192 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
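
As a rough illustration of the normalization step that the OTH record above reports as dramatically improving detection (this is a toy sketch, not the paper's pipeline; the confusables map is a tiny made-up sample), one can combine Unicode NFKC folding with an explicit look-alike table:

```python
import unicodedata

# Illustrative look-alike map (a handful of Cyrillic letters); real systems use
# much larger confusables tables, e.g. derived from Unicode's confusables data.
CONFUSABLES = {
    "\u0430": "a",  # Cyrillic small a
    "\u0435": "e",  # Cyrillic small ie
    "\u043e": "o",  # Cyrillic small o
    "\u0440": "p",  # Cyrillic small er
    "\u0441": "c",  # Cyrillic small es
}

def normalize_homoglyphs(text: str) -> str:
    # NFKC folds compatibility variants (e.g., fullwidth Latin) to standard forms.
    text = unicodedata.normalize("NFKC", text)
    # Replace remaining known confusable code points with their Latin look-alikes.
    return "".join(CONFUSABLES.get(ch, ch) for ch in text)

if __name__ == "__main__":
    masked = "h\u0430t\u0435 sp\u0435\u0435ch"  # Cyrillic vowels hide the word
    print(normalize_homoglyphs(masked))          # -> "hate speech"
```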
https://aclanthology.org/2023.findings-emnlp.193.bib | https://aclanthology.org/2023.findings-emnlp.193/ | @inproceedings{wang-etal-2023-reducing,
title = "Reducing Spurious Correlations in Aspect-based Sentiment Analysis with Explanation from Large Language Models",
author = "Wang, Qianlong and
Ding, Keyang and
Liang, Bin and
Yang, Min and
Xu, Ruifeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.193",
doi = "10.18653/v1/2023.findings-emnlp.193",
pages = "2930--2941",
abstract = "Recently, aspect-based sentiment analysis (ABSA) models have yielded promising results. However, they are susceptible to learning spurious correlations between certain words of the input text and output labels while modeling the sentiment feature of the aspect. This spurious correlation will potentially undermine the performance of ABSA models. One direct solution for this problem is to make the model see and learn an explanation of sentiment expression rather than certain words. Motivated by this, we exploit explanations for the sentiment polarity of each aspect from large language models (LLMs) to reduce spurious correlations in ABSA. First, we formulate a prompt template that wraps the sentence, an aspect, and the sentiment label. This template is utilized to prompt LLMs to generate an appropriate explanation that states the sentiment cause. Then, we propose two straightforward yet effective methods to leverage the explanation for preventing the learning of spurious correlations. We conducted extensive comparative experiments on five datasets by integrating them with some representative ABSA models. Results show that our methods can achieve performance gains and enhance the performance and generalization ability of ABSA models.",
}
| Recently, aspect-based sentiment analysis (ABSA) models have yielded promising results. However, they are susceptible to learning spurious correlations between certain words of the input text and output labels while modeling the sentiment feature of the aspect. This spurious correlation will potentially undermine the performance of ABSA models. One direct solution for this problem is to make the model see and learn an explanation of sentiment expression rather than certain words. Motivated by this, we exploit explanations for the sentiment polarity of each aspect from large language models (LLMs) to reduce spurious correlations in ABSA. First, we formulate a prompt template that wraps the sentence, an aspect, and the sentiment label. This template is utilized to prompt LLMs to generate an appropriate explanation that states the sentiment cause. Then, we propose two straightforward yet effective methods to leverage the explanation for preventing the learning of spurious correlations. We conducted extensive comparative experiments on five datasets by integrating them with some representative ABSA models. Results show that our methods can achieve performance gains and enhance the performance and generalization ability of ABSA models. | [
"Wang, Qianlong",
"Ding, Keyang",
"Liang, Bin",
"Yang, Min",
"Xu, Ruifeng"
] | Reducing Spurious Correlations in Aspect-based Sentiment Analysis with Explanation from Large Language Models | findings-emnlp.193 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.194.bib | https://aclanthology.org/2023.findings-emnlp.194/ | @inproceedings{furman-etal-2023-high,
title = "High-quality argumentative information in low resources approaches improve counter-narrative generation",
author = "Furman, Dami{\'a}n and
Torres, Pablo and
Rodr{\'\i}guez, Jos{\'e} and
Letzen, Diego and
Martinez, Maria and
Alemany, Laura",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.194",
doi = "10.18653/v1/2023.findings-emnlp.194",
pages = "2942--2956",
abstract = "It has been shown that high quality fine-tuning boosts the performance of language models, even if the size of the fine-tuning is small. In this work we show how highly targeted fine-tuning improves the task of hate speech counter-narrative generation in user-generated text, even for very small sizes of training (1722 counter-narratives for English and 355 for Spanish). Providing a small subset of examples focusing on single argumentative strategies, together with the argumentative analysis relevant to that strategy, yields counter-narratives that are as satisfactory as providing the whole set of counter-narratives. We also show that a good base model is required for the fine-tuning to have a positive impact. Indeed, for Spanish, the counter-narratives obtained without fine-tuning are mostly unacceptable, and, while fine-tuning improves their overall quality, the performance still remains quite unsatisfactory.",
}
| It has been shown that high quality fine-tuning boosts the performance of language models, even if the size of the fine-tuning is small. In this work we show how highly targeted fine-tuning improves the task of hate speech counter-narrative generation in user-generated text, even for very small sizes of training (1722 counter-narratives for English and 355 for Spanish). Providing a small subset of examples focusing on single argumentative strategies, together with the argumentative analysis relevant to that strategy, yields counter-narratives that are as satisfactory as providing the whole set of counter-narratives. We also show that a good base model is required for the fine-tuning to have a positive impact. Indeed, for Spanish, the counter-narratives obtained without fine-tuning are mostly unacceptable, and, while fine-tuning improves their overall quality, the performance still remains quite unsatisfactory. | [
"Furman, Dami{\\'a}n",
"Torres, Pablo",
"Rodr{\\'\\i}guez, Jos{\\'e}",
"Letzen, Diego",
"Martinez, Maria",
"Alemany, Laura"
] | High-quality argumentative information in low resources approaches improve counter-narrative generation | findings-emnlp.194 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.195.bib | https://aclanthology.org/2023.findings-emnlp.195/ | @inproceedings{lucas-etal-2023-reference,
title = "A Reference-free Segmentation Quality Index ({S}eg{R}e{F}ree)",
author = "Lucas, Evan and
Kangas, Dylan and
Havens, Timothy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.195",
doi = "10.18653/v1/2023.findings-emnlp.195",
pages = "2957--2968",
abstract = "Topic segmentation, in the context of natural language processing, is the process of finding boundaries in a sequence of sentences that separate groups of adjacent sentences at shifts in semantic meaning. Currently, assessing the quality of a segmentation is done by comparing segmentation boundaries selected by a human or algorithm to those selected by a known good reference. This means that it is not possible to quantify the quality of a segmentation without a human annotator, which can be costly and time consuming. This work seeks to improve assessment of segmentation by proposing a reference-free segmentation quality index (SegReFree). The metric takes advantage of the fact that segmentation at a sentence level generally seeks to identify segment boundaries at semantic boundaries within the text. The proposed metric uses a modified cluster validity metric with semantic embeddings of the sentences to determine the quality of the segmentation. Multiple segmentation data sets are used to compare our proposed metric with existing reference-based segmentation metrics by progressively degrading the reference segmentation while computing all possible metrics; through this process, a strong correlation with existing segmentation metrics is shown. A Python library implementing the metric is released under the GNU General Public License and the repository is available at \url{https://github.com/evan-person/reference_free_segmentation_metric}.",
}
| Topic segmentation, in the context of natural language processing, is the process of finding boundaries in a sequence of sentences that separate groups of adjacent sentences at shifts in semantic meaning. Currently, assessing the quality of a segmentation is done by comparing segmentation boundaries selected by a human or algorithm to those selected by a known good reference. This means that it is not possible to quantify the quality of a segmentation without a human annotator, which can be costly and time consuming. This work seeks to improve assessment of segmentation by proposing a reference-free segmentation quality index (SegReFree). The metric takes advantage of the fact that segmentation at a sentence level generally seeks to identify segment boundaries at semantic boundaries within the text. The proposed metric uses a modified cluster validity metric with semantic embeddings of the sentences to determine the quality of the segmentation. Multiple segmentation data sets are used to compare our proposed metric with existing reference-based segmentation metrics by progressively degrading the reference segmentation while computing all possible metrics; through this process, a strong correlation with existing segmentation metrics is shown. A Python library implementing the metric is released under the GNU General Public License and the repository is available at \url{https://github.com/evan-person/reference_free_segmentation_metric}. | [
"Lucas, Evan",
"Kangas, Dylan",
"Havens, Timothy"
] | A Reference-free Segmentation Quality Index (SegReFree) | findings-emnlp.195 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
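
The SegReFree record above scores a segmentation by treating segments as clusters of sentence embeddings and applying a modified cluster-validity metric. A simplified, hypothetical stand-in using scikit-learn's off-the-shelf Davies-Bouldin index (the released library uses its own modified metric and real sentence encoders) might look like:

```python
import numpy as np
from sklearn.metrics import davies_bouldin_score

def segmentation_quality(sentence_embeddings: np.ndarray, boundaries: list) -> float:
    """Score a segmentation without a reference.

    boundaries: sentence indices at which a new segment starts (0 excluded).
    Lower is better: tighter, better-separated segments.
    """
    labels = np.zeros(len(sentence_embeddings), dtype=int)
    for segment_id, start in enumerate(boundaries, start=1):
        labels[start:] = segment_id
    return davies_bouldin_score(sentence_embeddings, labels)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Two synthetic "topics": 5 sentences near one centroid, 5 near another.
    embeddings = np.vstack([rng.normal(0.0, 1.0, (5, 16)),
                            rng.normal(5.0, 1.0, (5, 16))])
    print(segmentation_quality(embeddings, boundaries=[5]))  # well-placed boundary
    print(segmentation_quality(embeddings, boundaries=[2]))  # poorly placed boundary
```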
https://aclanthology.org/2023.findings-emnlp.196.bib | https://aclanthology.org/2023.findings-emnlp.196/ | @inproceedings{cai-etal-2023-context,
title = "In-context Learning for Few-shot Multimodal Named Entity Recognition",
author = "Cai, Chenran and
Wang, Qianlong and
Liang, Bin and
Qin, Bing and
Yang, Min and
Wong, Kam-Fai and
Xu, Ruifeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.196",
doi = "10.18653/v1/2023.findings-emnlp.196",
pages = "2969--2979",
abstract = "Thanks in part to the availability of copious annotated resources for some entity categories, existing studies have achieved superior performance in multimodal named entity recognition (MNER). However, in the real-world scenario, it is infeasible to enumerate all entity categories in advance. Therefore, in this paper, we formulate a new few-shot multimodal named entity recognition (FewMNER) task, which aims to effectively locate and identify named entities for a text-image pair only using a small number of labeled examples. Further, we explore the merit of in-context learning (ICL) and propose a novel framework to deal with FewMNER, where three points are taken into account: i.e., converting visual modality, selecting useful examples, and designing an effective task demonstration. Specifically, we first employ an image caption model to convert images into textual descriptions, enabling large language models to absorb information from visual modality. Then, we use the ranking of the sum of similarity rankings from both text and image modalities to select k-nearest examples, which form a demonstration context. Finally, we utilize the MNER definition and the meaning of each entity category as effective instruction. Extensive experimental results demonstrate that our framework outperforms baselines under several few-shot settings.",
}
| Thanks in part to the availability of copious annotated resources for some entity categories, existing studies have achieved superior performance in multimodal named entity recognition (MNER). However, in the real-world scenario, it is infeasible to enumerate all entity categories in advance. Therefore, in this paper, we formulate a new few-shot multimodal named entity recognition (FewMNER) task, which aims to effectively locate and identify named entities for a text-image pair only using a small number of labeled examples. Further, we explore the merit of in-context learning (ICL) and propose a novel framework to deal with FewMNER, where three points are taken into account: i.e., converting visual modality, selecting useful examples, and designing an effective task demonstration. Specifically, we first employ an image caption model to convert images into textual descriptions, enabling large language models to absorb information from visual modality. Then, we use the ranking of the sum of similarity rankings from both text and image modalities to select k-nearest examples, which form a demonstration context. Finally, we utilize the MNER definition and the meaning of each entity category as effective instruction. Extensive experimental results demonstrate that our framework outperforms baselines under several few-shot settings. | [
"Cai, Chenran",
"Wang, Qianlong",
"Liang, Bin",
"Qin, Bing",
"Yang, Min",
"Wong, Kam-Fai",
"Xu, Ruifeng"
] | In-context Learning for Few-shot Multimodal Named Entity Recognition | findings-emnlp.196 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
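
The FewMNER record above selects demonstrations by the ranking of the sum of similarity rankings from the text and image modalities. Below is a minimal sketch of that rank-sum selection, assuming similarity scores from arbitrary text and image encoders are already computed (all names and values are illustrative):

```python
import numpy as np

def select_demonstrations(text_sims: np.ndarray,
                          image_sims: np.ndarray,
                          k: int = 4) -> np.ndarray:
    """Return indices of the k candidates with the smallest summed rank."""
    # Higher similarity -> better (smaller) rank; rank 0 is the most similar.
    text_rank = np.argsort(np.argsort(-text_sims))
    image_rank = np.argsort(np.argsort(-image_sims))
    return np.argsort(text_rank + image_rank)[:k]

if __name__ == "__main__":
    # Toy similarity scores of 4 candidate demonstrations to one test instance.
    text_sims = np.array([0.9, 0.2, 0.7, 0.4])
    image_sims = np.array([0.1, 0.8, 0.6, 0.5])
    print(select_demonstrations(text_sims, image_sims, k=2))
```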
https://aclanthology.org/2023.findings-emnlp.197.bib | https://aclanthology.org/2023.findings-emnlp.197/ | @inproceedings{zablotskaia-etal-2023-uncertainty,
title = "On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study",
author = "Zablotskaia, Polina and
Phan, Du and
Maynez, Joshua and
Narayan, Shashi and
Ren, Jie and
Liu, Jeremiah",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.197",
doi = "10.18653/v1/2023.findings-emnlp.197",
pages = "2980--2992",
abstract = "Modern deep models for summarization attains impressive benchmark performance, but they are prone to generating miscalibrated predictive uncertainty. This means that they assign high confidence to low-quality predictions, leading to compromised reliability and trustworthiness in real-world applications. Probabilistic deep learning methods are common solutions to the miscalibration problem. However, their relative effectiveness in complex autoregressive summarization tasks are not well-understood. In this work, we thoroughly investigate different state-of-the-art probabilistic methods{'} effectiveness in improving the uncertainty quality of the neural summarization models, across three large-scale benchmarks with varying difficulty using our newly introduced evaluation protocol. We show that the probabilistic methods consistently improve the model{'}s generation and uncertainty quality, leading to improved selective generation performance (i.e., abstaining from low-quality summaries) in practice. We also reveal notable failure patterns of probabilistic methods widely-adopted in NLP community (e.g., Deep Ensemble and Monte Carlo Dropout), cautioning the importance of choosing appropriate method for the data setting.",
}
| Modern deep models for summarization attain impressive benchmark performance, but they are prone to generating miscalibrated predictive uncertainty. This means that they assign high confidence to low-quality predictions, leading to compromised reliability and trustworthiness in real-world applications. Probabilistic deep learning methods are common solutions to the miscalibration problem. However, their relative effectiveness in complex autoregressive summarization tasks is not well-understood. In this work, we thoroughly investigate different state-of-the-art probabilistic methods{'} effectiveness in improving the uncertainty quality of the neural summarization models, across three large-scale benchmarks with varying difficulty using our newly introduced evaluation protocol. We show that the probabilistic methods consistently improve the model{'}s generation and uncertainty quality, leading to improved selective generation performance (i.e., abstaining from low-quality summaries) in practice. We also reveal notable failure patterns of probabilistic methods widely adopted in the NLP community (e.g., Deep Ensemble and Monte Carlo Dropout), cautioning the importance of choosing an appropriate method for the data setting. | [
"Zablotskaia, Polina",
"Phan, Du",
"Maynez, Joshua",
"Narayan, Shashi",
"Ren, Jie",
"Liu, Jeremiah"
] | On Uncertainty Calibration and Selective Generation in Probabilistic Neural Summarization: A Benchmark Study | findings-emnlp.197 | 2304.08653 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.198.bib | https://aclanthology.org/2023.findings-emnlp.198/ | @inproceedings{zhang-duh-2023-handshape,
title = "Handshape-Aware Sign Language Recognition: Extended Datasets and Exploration of Handshape-Inclusive Methods",
author = "Zhang, Xuan and
Duh, Kevin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.198",
doi = "10.18653/v1/2023.findings-emnlp.198",
pages = "2993--3002",
abstract = "The majority of existing work on sign language recognition encodes signed videos without explicitly acknowledging the phonological attributes of signs. Given that handshape is a vital parameter in sign languages, we explore the potential of handshape-aware sign language recognition. We augment the PHOENIX14T dataset with gloss-level handshape labels, resulting in the new PHOENIX14T-HS dataset. Two unique methods are proposed for handshape-inclusive sign language recognition: a single-encoder network and a dual-encoder network, complemented by a training strategy that simultaneously optimizes both the CTC loss and frame-level cross-entropy loss. The proposed methodology consistently outperforms the baseline performance. The dataset and code can be accessed at: www.anonymous.com.",
}
| The majority of existing work on sign language recognition encodes signed videos without explicitly acknowledging the phonological attributes of signs. Given that handshape is a vital parameter in sign languages, we explore the potential of handshape-aware sign language recognition. We augment the PHOENIX14T dataset with gloss-level handshape labels, resulting in the new PHOENIX14T-HS dataset. Two unique methods are proposed for handshape-inclusive sign language recognition: a single-encoder network and a dual-encoder network, complemented by a training strategy that simultaneously optimizes both the CTC loss and frame-level cross-entropy loss. The proposed methodology consistently outperforms the baseline performance. The dataset and code can be accessed at: www.anonymous.com. | [
"Zhang, Xuan",
"Duh, Kevin"
] | Handshape-Aware Sign Language Recognition: Extended Datasets and Exploration of Handshape-Inclusive Methods | findings-emnlp.198 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.199.bib | https://aclanthology.org/2023.findings-emnlp.199/ | @inproceedings{choi-etal-2023-simckp,
title = "{S}im{CKP}: Simple Contrastive Learning of Keyphrase Representations",
author = "Choi, Minseok and
Gwak, Chaeheon and
Kim, Seho and
Kim, Si and
Choo, Jaegul",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.199",
doi = "10.18653/v1/2023.findings-emnlp.199",
pages = "3003--3015",
abstract = "Keyphrase generation (KG) aims to generate a set of summarizing words or phrases given a source document, while keyphrase extraction (KE) aims to identify them from the text. Because the search space is much smaller in KE, it is often combined with KG to predict keyphrases that may or may not exist in the corresponding document. However, current unified approaches adopt sequence labeling and maximization-based generation that primarily operate at a token level, falling short in observing and scoring keyphrases as a whole. In this work, we propose SimCKP, a simple contrastive learning framework that consists of two stages: 1) An extractor-generator that extracts keyphrases by learning context-aware phrase-level representations in a contrastive manner while also generating keyphrases that do not appear in the document; 2) A reranker that adapts scores for each generated phrase by likewise aligning their representations with the corresponding document. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our proposed approach, which outperforms the state-of-the-art models by a significant margin.",
}
| Keyphrase generation (KG) aims to generate a set of summarizing words or phrases given a source document, while keyphrase extraction (KE) aims to identify them from the text. Because the search space is much smaller in KE, it is often combined with KG to predict keyphrases that may or may not exist in the corresponding document. However, current unified approaches adopt sequence labeling and maximization-based generation that primarily operate at a token level, falling short in observing and scoring keyphrases as a whole. In this work, we propose SimCKP, a simple contrastive learning framework that consists of two stages: 1) An extractor-generator that extracts keyphrases by learning context-aware phrase-level representations in a contrastive manner while also generating keyphrases that do not appear in the document; 2) A reranker that adapts scores for each generated phrase by likewise aligning their representations with the corresponding document. Experimental results on multiple benchmark datasets demonstrate the effectiveness of our proposed approach, which outperforms the state-of-the-art models by a significant margin. | [
"Choi, Minseok",
"Gwak, Chaeheon",
"Kim, Seho",
"Kim, Si",
"Choo, Jaegul"
] | SimCKP: Simple Contrastive Learning of Keyphrase Representations | findings-emnlp.199 | 2310.08221 | [
"https://github.com/brightjade/SimCKP"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.200.bib | https://aclanthology.org/2023.findings-emnlp.200/ | @inproceedings{niklaus-etal-2023-lextreme,
title = "{LEXTREME}: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain",
author = {Niklaus, Joel and
Matoshi, Veton and
Rani, Pooja and
Galassi, Andrea and
St{\"u}rmer, Matthias and
Chalkidis, Ilias},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.200",
doi = "10.18653/v1/2023.findings-emnlp.200",
pages = "3016--3054",
abstract = "Lately, propelled by phenomenal advances around the transformer architecture, the legal NLP field has enjoyed spectacular growth. To measure progress, well-curated and challenging benchmarks are crucial. Previous efforts have produced numerous benchmarks for general NLP models, typically based on news or Wikipedia. However, these may not fit specific domains such as law, with its unique lexicons and intricate sentence structures. Even though there is a rising need to build NLP systems for languages other than English, many benchmarks are available only in English and no multilingual benchmark exists in the legal NLP field. We survey the legal NLP literature and select 11 datasets covering 24 languages, creating LEXTREME. To fairly compare models, we propose two aggregate scores, i.e., dataset aggregate score and language aggregate score. Our results show that even the best baseline only achieves modest results, and also ChatGPT struggles with many tasks. This indicates that LEXTREME remains a challenging task with ample room for improvement. To facilitate easy use for researchers and practitioners, we release LEXTREME on huggingface along with a public leaderboard and the necessary code to evaluate models. We also provide a public Weights and Biases project containing all runs for transparency.",
}
| Lately, propelled by phenomenal advances around the transformer architecture, the legal NLP field has enjoyed spectacular growth. To measure progress, well-curated and challenging benchmarks are crucial. Previous efforts have produced numerous benchmarks for general NLP models, typically based on news or Wikipedia. However, these may not fit specific domains such as law, with its unique lexicons and intricate sentence structures. Even though there is a rising need to build NLP systems for languages other than English, many benchmarks are available only in English and no multilingual benchmark exists in the legal NLP field. We survey the legal NLP literature and select 11 datasets covering 24 languages, creating LEXTREME. To fairly compare models, we propose two aggregate scores, i.e., dataset aggregate score and language aggregate score. Our results show that even the best baseline only achieves modest results, and also ChatGPT struggles with many tasks. This indicates that LEXTREME remains a challenging task with ample room for improvement. To facilitate easy use for researchers and practitioners, we release LEXTREME on huggingface along with a public leaderboard and the necessary code to evaluate models. We also provide a public Weights and Biases project containing all runs for transparency. | [
"Niklaus, Joel",
"Matoshi, Veton",
"Rani, Pooja",
"Galassi, Andrea",
"St{\\\"u}rmer, Matthias",
"Chalkidis, Ilias"
] | LEXTREME: A Multi-Lingual and Multi-Task Benchmark for the Legal Domain | findings-emnlp.200 | 2301.13126 | [
"https://github.com/joelniklaus/lextreme"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.201.bib | https://aclanthology.org/2023.findings-emnlp.201/ | @inproceedings{yen-hsu-2023-three,
title = "Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning",
author = "Yen, An-Zi and
Hsu, Wei-Ling",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.201",
doi = "10.18653/v1/2023.findings-emnlp.201",
pages = "3055--3069",
abstract = "Due to the remarkable language understanding and generation abilities of large language models (LLMs), their use in educational applications has been explored. However, little work has been done on investigating the pedagogical ability of LLMs in helping students to learn mathematics. In this position paper, we discuss the challenges associated with employing LLMs to enhance students{'} mathematical problem-solving skills by providing adaptive feedback. Apart from generating the wrong reasoning processes, LLMs can misinterpret the meaning of the question, and also exhibit difficulty in understanding the given questions{'} rationales when attempting to correct students{'} answers. Three research questions are formulated.",
}
| Due to the remarkable language understanding and generation abilities of large language models (LLMs), their use in educational applications has been explored. However, little work has been done on investigating the pedagogical ability of LLMs in helping students to learn mathematics. In this position paper, we discuss the challenges associated with employing LLMs to enhance students{'} mathematical problem-solving skills by providing adaptive feedback. Apart from generating the wrong reasoning processes, LLMs can misinterpret the meaning of the question, and also exhibit difficulty in understanding the given questions{'} rationales when attempting to correct students{'} answers. Three research questions are formulated. | [
"Yen, An-Zi",
"Hsu, Wei-Ling"
] | Three Questions Concerning the Use of Large Language Models to Facilitate Mathematics Learning | findings-emnlp.201 | 2310.13615 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.202.bib | https://aclanthology.org/2023.findings-emnlp.202/ | @inproceedings{guo-etal-2023-simultaneous,
title = "Simultaneous Machine Translation with Tailored Reference",
author = "Guo, Shoutao and
Zhang, Shaolei and
Feng, Yang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.202",
doi = "10.18653/v1/2023.findings-emnlp.202",
pages = "3070--3084",
abstract = "Simultaneous machine translation (SiMT) generates translation while reading the whole source sentence. However, existing SiMT models are typically trained using the same reference disregarding the varying amounts of available source information at different latency. Training the model with ground-truth at low latency may introduce forced anticipations, whereas utilizing reference consistent with the source word order at high latency results in performance degradation. Consequently, it is crucial to train the SiMT model with appropriate reference that avoids forced anticipations during training while maintaining high quality. In this paper, we propose a novel method that provides tailored reference for the SiMT models trained at different latency by rephrasing the ground-truth. Specifically, we introduce the tailor, induced by reinforcement learning, to modify ground-truth to the tailored reference. The SiMT model is trained with the tailored reference and jointly optimized with the tailor to enhance performance. Importantly, our method is applicable to a wide range of current SiMT approaches. Experiments on three translation tasks demonstrate that our method achieves state-of-the-art performance in both fixed and adaptive policies.",
}
| Simultaneous machine translation (SiMT) generates translation while reading the whole source sentence. However, existing SiMT models are typically trained using the same reference disregarding the varying amounts of available source information at different latency. Training the model with ground-truth at low latency may introduce forced anticipations, whereas utilizing reference consistent with the source word order at high latency results in performance degradation. Consequently, it is crucial to train the SiMT model with appropriate reference that avoids forced anticipations during training while maintaining high quality. In this paper, we propose a novel method that provides tailored reference for the SiMT models trained at different latency by rephrasing the ground-truth. Specifically, we introduce the tailor, induced by reinforcement learning, to modify ground-truth to the tailored reference. The SiMT model is trained with the tailored reference and jointly optimized with the tailor to enhance performance. Importantly, our method is applicable to a wide range of current SiMT approaches. Experiments on three translation tasks demonstrate that our method achieves state-of-the-art performance in both fixed and adaptive policies. | [
"Guo, Shoutao",
"Zhang, Shaolei",
"Feng, Yang"
] | Simultaneous Machine Translation with Tailored Reference | findings-emnlp.202 | 2310.13588 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.203.bib | https://aclanthology.org/2023.findings-emnlp.203/ | @inproceedings{xue-etal-2023-dynamic,
title = "Dynamic Voting for Efficient Reasoning in Large Language Models",
author = "Xue, Mingfeng and
Liu, Dayiheng and
Lei, Wenqiang and
Ren, Xingzhang and
Yang, Baosong and
Xie, Jun and
Zhang, Yidan and
Peng, Dezhong and
Lv, Jiancheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.203",
doi = "10.18653/v1/2023.findings-emnlp.203",
pages = "3085--3104",
abstract = "Multi-path voting methods like Self-consistency have been used to mitigate reasoning errors in large language models caused by factual errors and illusion generation. However, these methods require excessive computing resources as they generate numerous reasoning paths for each problem. And our experiments show that on the arithmetic reasoning task, SVAMP, half of the problems fail to obtain noticeable accuracy gains when voting with more than three paths. In this paper, we propose a novel multi-path voting technique called Dynamic Voting, which effectively reduces the number of reasoning paths during multi-path voting while preserving accuracies by applying early exiting for problems that large language models can confidently solve. Experimental evaluations on arithmetic, commonsense, and symbolic reasoning tasks under few-shot and zero-shot settings demonstrate that Dynamic Voting achieves comparable accuracies employing significantly fewer reasoning paths. Notably, one of our Dynamic Voting strategies outperforms Self-consistency using only 24.7{\%} of the number of paths on the LetterConcat task in the few-shot setting. Furthermore, Dynamic Voting showcases strong robustness in threshold selection. It also demonstrates excellent generalizability when combined with other voting techniques, different models, and diverse prompts.",
}
| Multi-path voting methods like Self-consistency have been used to mitigate reasoning errors in large language models caused by factual errors and illusion generation. However, these methods require excessive computing resources as they generate numerous reasoning paths for each problem. And our experiments show that on the arithmetic reasoning task, SVAMP, half of the problems fail to obtain noticeable accuracy gains when voting with more than three paths. In this paper, we propose a novel multi-path voting technique called Dynamic Voting, which effectively reduces the number of reasoning paths during multi-path voting while preserving accuracies by applying early exiting for problems that large language models can confidently solve. Experimental evaluations on arithmetic, commonsense, and symbolic reasoning tasks under few-shot and zero-shot settings demonstrate that Dynamic Voting achieves comparable accuracies employing significantly fewer reasoning paths. Notably, one of our Dynamic Voting strategies outperforms Self-consistency using only 24.7{\%} of the number of paths on the LetterConcat task in the few-shot setting. Furthermore, Dynamic Voting showcases strong robustness in threshold selection. It also demonstrates excellent generalizability when combined with other voting techniques, different models, and diverse prompts. | [
"Xue, Mingfeng",
"Liu, Dayiheng",
"Lei, Wenqiang",
"Ren, Xingzhang",
"Yang, Baosong",
"Xie, Jun",
"Zhang, Yidan",
"Peng, Dezhong",
"Lv, Jiancheng"
] | Dynamic Voting for Efficient Reasoning in Large Language Models | findings-emnlp.203 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.204.bib | https://aclanthology.org/2023.findings-emnlp.204/ | @inproceedings{lodha-etal-2023-surgical,
title = "On Surgical Fine-tuning for Language Encoders",
author = "Lodha, Abhilasha and
Belapurkar, Gayatri and
Chalkapurkar, Saloni and
Tao, Yuanming and
Ghosh, Reshmi and
Basu, Samyadeep and
Petrov, Dmitrii and
Srinivasan, Soundararajan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.204",
doi = "10.18653/v1/2023.findings-emnlp.204",
pages = "3105--3113",
abstract = "Fine-tuning all the layers of a pre-trained neural language encoder (either using all the parameters or using parameter-efficient methods) is often the de-facto way of adapting it to a new task. We show evidence that for different downstream language tasks, fine-tuning only a subset of layers is sufficient to obtain performance that is close to and often better than fine-tuning all the layers in the language encoder. We propose an efficient metric based on the diagonal of the Fisher information matrix (FIM score), to select the candidate layers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE tasks and across distinct language encoders, that this metric can effectively select layers leading to a strong downstream performance. Our work highlights that task-specific information corresponding to a given downstream task is often localized within a few layers, and tuning only those is sufficient for strong performance. Additionally, we demonstrate the robustness of the FIM score to rank layers in a manner that remains constant during the optimization process.",
}
| Fine-tuning all the layers of a pre-trained neural language encoder (either using all the parameters or using parameter-efficient methods) is often the de-facto way of adapting it to a new task. We show evidence that for different downstream language tasks, fine-tuning only a subset of layers is sufficient to obtain performance that is close to and often better than fine-tuning all the layers in the language encoder. We propose an efficient metric based on the diagonal of the Fisher information matrix (FIM score), to select the candidate layers for selective fine-tuning. We show, empirically on GLUE and SuperGLUE tasks and across distinct language encoders, that this metric can effectively select layers leading to a strong downstream performance. Our work highlights that task-specific information corresponding to a given downstream task is often localized within a few layers, and tuning only those is sufficient for strong performance. Additionally, we demonstrate the robustness of the FIM score to rank layers in a manner that remains constant during the optimization process. | [
"Lodha, Abhilasha",
"Belapurkar, Gayatri",
"Chalkapurkar, Saloni",
"Tao, Yuanming",
"Ghosh, Reshmi",
"Basu, Samyadeep",
"Petrov, Dmitrii",
"Srinivasan, Soundararajan"
] | On Surgical Fine-tuning for Language Encoders | findings-emnlp.204 | 2310.17041 | [
"https://github.com/ymtao5219/surgical_fine_tuning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.205.bib | https://aclanthology.org/2023.findings-emnlp.205/ | @inproceedings{ouyang-li-2023-autoplan,
title = "{A}uto{P}lan: Automatic Planning of Interactive Decision-Making Tasks With Large Language Models",
author = "Ouyang, Siqi and
Li, Lei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.205",
doi = "10.18653/v1/2023.findings-emnlp.205",
pages = "3114--3128",
abstract = "Recent large language models (LLMs) are promising for making decisions in grounded environments. However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment. Existing methods require either costly gradient computation or lengthy in-context demonstrations. In this paper, we propose AutoPlan, an approach to guide LLM-based agents to accomplish interactive decision-making tasks. AutoPlan augments the LLM prompt with a task-solving plan and optimizes it through iterative experience collection and reflection. Our experiments show that AutoPlan, though using no in-context demonstrations, achieves success rates on par with the baselines using human-written demonstrations on ALFWorld and even outperforms them by 8{\%} on HotpotQA. The code is available at https://github.com/owaski/AutoPlan.",
}
| Recent large language models (LLMs) are promising for making decisions in grounded environments. However, LLMs frequently fail in complex decision-making tasks due to the misalignment between the pre-trained knowledge in LLMs and the actual rules in the environment. Existing methods require either costly gradient computation or lengthy in-context demonstrations. In this paper, we propose AutoPlan, an approach to guide LLM-based agents to accomplish interactive decision-making tasks. AutoPlan augments the LLM prompt with a task-solving plan and optimizes it through iterative experience collection and reflection. Our experiments show that AutoPlan, though using no in-context demonstrations, achieves success rates on par with the baselines using human-written demonstrations on ALFWorld and even outperforms them by 8{\%} on HotpotQA. The code is available at https://github.com/owaski/AutoPlan. | [
"Ouyang, Siqi",
"Li, Lei"
] | AutoPlan: Automatic Planning of Interactive Decision-Making Tasks With Large Language Models | findings-emnlp.205 | 2305.15064 | [
"https://github.com/owaski/autoplan"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.206.bib | https://aclanthology.org/2023.findings-emnlp.206/ | @inproceedings{reich-etal-2023-measuring,
title = "Measuring Faithful and Plausible Visual Grounding in {VQA}",
author = "Reich, Daniel and
Putze, Felix and
Schultz, Tanja",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.206",
doi = "10.18653/v1/2023.findings-emnlp.206",
pages = "3129--3144",
abstract = "Metrics for Visual Grounding (VG) in Visual Question Answering (VQA) systems primarily aim to measure a system{'}s reliance on relevant parts of the image when inferring an answer to the given question. Lack of VG has been a common problem among state-of-the-art VQA systems and can manifest in over-reliance on irrelevant image parts or a disregard for the visual modality entirely. Although inference capabilities of VQA models are often illustrated by a few qualitative illustrations, most systems are not quantitatively assessed for their VG properties. We believe, an easily calculated criterion for meaningfully measuring a system{'}s VG can help remedy this shortcoming, as well as add another valuable dimension to model evaluations and analysis. To this end, we propose a new VG metric that captures if a model a) identifies question-relevant objects in the scene, and b) actually relies on the information contained in the relevant objects when producing its answer, i.e., if its visual grounding is both {``}faithful{''} and {``}plausible{''}. Our metric, called Faithful {\&} Plausible Visual Grounding (FPVG), is straightforward to determine for most VQA model designs. We give a detailed description of FPVG and evaluate several reference systems spanning various VQA architectures. Code to support the metric calculations on the GQA data set is available on GitHub.",
}
| Metrics for Visual Grounding (VG) in Visual Question Answering (VQA) systems primarily aim to measure a system{'}s reliance on relevant parts of the image when inferring an answer to the given question. Lack of VG has been a common problem among state-of-the-art VQA systems and can manifest in over-reliance on irrelevant image parts or a disregard for the visual modality entirely. Although inference capabilities of VQA models are often illustrated by a few qualitative illustrations, most systems are not quantitatively assessed for their VG properties. We believe, an easily calculated criterion for meaningfully measuring a system{'}s VG can help remedy this shortcoming, as well as add another valuable dimension to model evaluations and analysis. To this end, we propose a new VG metric that captures if a model a) identifies question-relevant objects in the scene, and b) actually relies on the information contained in the relevant objects when producing its answer, i.e., if its visual grounding is both {``}faithful{''} and {``}plausible{''}. Our metric, called Faithful {\&} Plausible Visual Grounding (FPVG), is straightforward to determine for most VQA model designs. We give a detailed description of FPVG and evaluate several reference systems spanning various VQA architectures. Code to support the metric calculations on the GQA data set is available on GitHub. | [
"Reich, Daniel",
"Putze, Felix",
"Schultz, Tanja"
] | Measuring Faithful and Plausible Visual Grounding in VQA | findings-emnlp.206 | 2305.15015 | [
"https://github.com/dreichcsl/fpvg"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.207.bib | https://aclanthology.org/2023.findings-emnlp.207/ | @inproceedings{cho-etal-2023-improving,
title = "Improving Zero-shot Reader by Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering",
author = "Cho, Sukmin and
Seo, Jeongyeon and
Jeong, Soyeong and
Park, Jong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.207",
doi = "10.18653/v1/2023.findings-emnlp.207",
pages = "3145--3157",
abstract = "Large language models (LLMs) enable zero-shot approaches in open-domain question answering (ODQA), yet with limited advancements as the reader is compared to the retriever. This study aims at the feasibility of a zero-shot reader that addresses the challenges of computational cost and the need for labeled data. We find that LLMs are distracted due to irrelevant documents in the retrieved set and the overconfidence of the generated answers when they are exploited as zero-shot readers. To tackle these problems, we mitigate the impact of such documents via Distraction-aware Answer Selection (DAS) with a negation-based instruction and score adjustment for proper answer selection. Experimental results show that our approach successfully handles distraction across diverse scenarios, enhancing the performance of zero-shot readers. Furthermore, unlike supervised readers struggling with unseen data, zero-shot readers demonstrate outstanding transferability without any training.",
}
| Large language models (LLMs) enable zero-shot approaches in open-domain question answering (ODQA), yet with limited advancements as the reader is compared to the retriever. This study aims at the feasibility of a zero-shot reader that addresses the challenges of computational cost and the need for labeled data. We find that LLMs are distracted due to irrelevant documents in the retrieved set and the overconfidence of the generated answers when they are exploited as zero-shot readers. To tackle these problems, we mitigate the impact of such documents via Distraction-aware Answer Selection (DAS) with a negation-based instruction and score adjustment for proper answer selection. Experimental results show that our approach successfully handles distraction across diverse scenarios, enhancing the performance of zero-shot readers. Furthermore, unlike supervised readers struggling with unseen data, zero-shot readers demonstrate outstanding transferability without any training. | [
"Cho, Sukmin",
"Seo, Jeongyeon",
"Jeong, Soyeong",
"Park, Jong"
] | Improving Zero-shot Reader by Reducing Distractions from Irrelevant Documents in Open-Domain Question Answering | findings-emnlp.207 | 2310.17490 | [
""
] | https://huggingface.co/papers/2310.17490 | 0 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.208.bib | https://aclanthology.org/2023.findings-emnlp.208/ | @inproceedings{jain-etal-2023-summarize,
title = "Can you Summarize my learnings? Towards Perspective-based Educational Dialogue Summarization",
author = "Jain, Raghav and
Saha, Tulika and
Lalwani, Jhagrut and
Saha, Sriparna",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.208",
doi = "10.18653/v1/2023.findings-emnlp.208",
pages = "3158--3173",
abstract = "The steady increase in the utilization of Virtual Tutors (VT) over recent years has allowed for a more efficient, personalized, and interactive AI-based learning experiences. A vital aspect in these educational chatbots is summarizing the conversations between the VT and the students, as it is critical in consolidating learning points and monitoring progress. However, the approach to summarization should be tailored according to the perspective. Summarization from the VTs perspective should emphasize on its teaching efficiency and potential improvements. Conversely, student-oriented summaries should distill learning points, track progress, and suggest scope for improvements. Based on this hypothesis, in this work, we propose a new task of Multi-modal Perspective based Dialogue Summarization (MM-PerSumm), demonstrated in an educational setting. Towards this aim, we introduce a novel dataset, CIMA-Summ that summarizes educational dialogues from three unique perspectives: the Student, the Tutor, and a Generic viewpoint. In addition, we propose an Image and Perspective-guided Dialogue Summarization (IP-Summ) model which is a Seq2Seq language model incorporating (i) multi-modal learning from images and (ii) a perspective-based encoder that constructs a dialogue graph capturing the intentions and actions of both the VT and the student, enabling the summarization of a dialogue from diverse perspectives. Lastly, we conduct detailed analyses of our model{'}s performance, highlighting the aspects that could lead to optimal modeling of IP-Summ.",
}
| The steady increase in the utilization of Virtual Tutors (VT) over recent years has allowed for a more efficient, personalized, and interactive AI-based learning experiences. A vital aspect in these educational chatbots is summarizing the conversations between the VT and the students, as it is critical in consolidating learning points and monitoring progress. However, the approach to summarization should be tailored according to the perspective. Summarization from the VTs perspective should emphasize on its teaching efficiency and potential improvements. Conversely, student-oriented summaries should distill learning points, track progress, and suggest scope for improvements. Based on this hypothesis, in this work, we propose a new task of Multi-modal Perspective based Dialogue Summarization (MM-PerSumm), demonstrated in an educational setting. Towards this aim, we introduce a novel dataset, CIMA-Summ that summarizes educational dialogues from three unique perspectives: the Student, the Tutor, and a Generic viewpoint. In addition, we propose an Image and Perspective-guided Dialogue Summarization (IP-Summ) model which is a Seq2Seq language model incorporating (i) multi-modal learning from images and (ii) a perspective-based encoder that constructs a dialogue graph capturing the intentions and actions of both the VT and the student, enabling the summarization of a dialogue from diverse perspectives. Lastly, we conduct detailed analyses of our model{'}s performance, highlighting the aspects that could lead to optimal modeling of IP-Summ. | [
"Jain, Raghav",
"Saha, Tulika",
"Lalwani, Jhagrut",
"Saha, Sriparna"
] | Can you Summarize my learnings? Towards Perspective-based Educational Dialogue Summarization | findings-emnlp.208 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.209.bib | https://aclanthology.org/2023.findings-emnlp.209/ | @inproceedings{cheng-etal-2023-adaptive,
title = "Adaptive Textual Label Noise Learning based on Pre-trained Models",
author = "Cheng, Shaohuan and
Chen, Wenyu and
Mingsheng, Fu and
Xie, Xuanting and
Qu, Hong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.209",
doi = "10.18653/v1/2023.findings-emnlp.209",
pages = "3174--3188",
abstract = "The label noise in real-world scenarios is unpredictable and can even be a mixture of different types of noise. To meet this challenge, we develop an adaptive textual label noise learning framework based on pre-trained models, which consists of an adaptive warm-up stage and a hybrid training stage. Specifically, an early stopping method, relying solely on the training set, is designed to dynamically terminate the warm-up process based on the model{'}s fit level to different noise scenarios. The hybrid training stage incorporates several generalization strategies to gradually correct mislabeled instances, thereby making better use of noisy data. Experiments on multiple datasets demonstrate that our approach performs comparably or even surpasses the state-of-the-art methods in various noise scenarios, including scenarios with the mixture of multiple types of noise.",
}
| The label noise in real-world scenarios is unpredictable and can even be a mixture of different types of noise. To meet this challenge, we develop an adaptive textual label noise learning framework based on pre-trained models, which consists of an adaptive warm-up stage and a hybrid training stage. Specifically, an early stopping method, relying solely on the training set, is designed to dynamically terminate the warm-up process based on the model{'}s fit level to different noise scenarios. The hybrid training stage incorporates several generalization strategies to gradually correct mislabeled instances, thereby making better use of noisy data. Experiments on multiple datasets demonstrate that our approach performs comparably or even surpasses the state-of-the-art methods in various noise scenarios, including scenarios with the mixture of multiple types of noise. | [
"Cheng, Shaohuan",
"Chen, Wenyu",
"Mingsheng, Fu",
"Xie, Xuanting",
"Qu, Hong"
] | Adaptive Textual Label Noise Learning based on Pre-trained Models | findings-emnlp.209 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.210.bib | https://aclanthology.org/2023.findings-emnlp.210/ | @inproceedings{ren-etal-2023-towards,
title = "Towards Informative Open-ended Text Generation with Dynamic Knowledge Triples",
author = "Ren, Zixuan and
Zhao, Yang and
Zong, Chengqing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.210",
doi = "10.18653/v1/2023.findings-emnlp.210",
pages = "3189--3203",
abstract = "Pretrained language models (PLMs), especially large language models (LLMs) demonstrate impressive capabilities in open-ended text generation. While our statistical results show that LLMs often suffer from over-concentrated information, where the generated texts overly focus on the given prompt and fail to provide sufficient background and detailed information as humans do. To address this issue, we propose a dynamic knowledge-guided informative open-ended text generation approach, that utilizes a knowledge graph to help the model generate more contextually related entities and detailed facts. Specifically, we first employ a local knowledge filter to extract relevant knowledge from the comprehensive knowledge graph for a given topic sentence. Then we introduce a dynamic knowledge selector to predict the entity to be mentioned in the subsequent sentence. Finally, we utilize a knowledge-enhanced text generator to produce a more informative output. To evaluate the effectiveness of our approach, we evaluate the proposed approach in two scenarios: fine-tuning for small PLMs and prompt tuning for LLMs. Experimental results show that our approach could generate more informative texts than baselines.",
}
| Pretrained language models (PLMs), especially large language models (LLMs) demonstrate impressive capabilities in open-ended text generation. While our statistical results show that LLMs often suffer from over-concentrated information, where the generated texts overly focus on the given prompt and fail to provide sufficient background and detailed information as humans do. To address this issue, we propose a dynamic knowledge-guided informative open-ended text generation approach, that utilizes a knowledge graph to help the model generate more contextually related entities and detailed facts. Specifically, we first employ a local knowledge filter to extract relevant knowledge from the comprehensive knowledge graph for a given topic sentence. Then we introduce a dynamic knowledge selector to predict the entity to be mentioned in the subsequent sentence. Finally, we utilize a knowledge-enhanced text generator to produce a more informative output. To evaluate the effectiveness of our approach, we evaluate the proposed approach in two scenarios: fine-tuning for small PLMs and prompt tuning for LLMs. Experimental results show that our approach could generate more informative texts than baselines. | [
"Ren, Zixuan",
"Zhao, Yang",
"Zong, Chengqing"
] | Towards Informative Open-ended Text Generation with Dynamic Knowledge Triples | findings-emnlp.210 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.211.bib | https://aclanthology.org/2023.findings-emnlp.211/ | @inproceedings{liu-etal-2023-novel,
title = "Novel Relation Detection: Discovering Unknown Relation Types via Multi-Strategy Self-Supervised Learning",
author = "Liu, Qingbin and
Kung, Yin and
Hao, Yanchao and
Sui, Dianbo and
Cheng, Siyuan and
Chen, Xi and
Zhang, Ningyu and
Chen, Jiaoyan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.211",
doi = "10.18653/v1/2023.findings-emnlp.211",
pages = "3204--3214",
abstract = "Conventional approaches to relation extraction can only recognize predefined relation types. In the real world, new or out-of-scope relation types may keep challenging the deployed models. In this paper, we formalize such a challenging problem as Novel Relation Detection (NRD), which aims to discover potential new relation types based on training samples of known relations. To this end, we construct two NRD datasets and exhaustively investigate a variety of out-of-scope detection methods. We further propose an effective NRD method that utilizes multi-strategy self-supervised learning to handle the problem of shallow semantic similarity in the NRD task. Experimental results demonstrate the effectiveness of our method, which significantly outperforms previous state-of-the-art methods on both datasets.",
}
| Conventional approaches to relation extraction can only recognize predefined relation types. In the real world, new or out-of-scope relation types may keep challenging the deployed models. In this paper, we formalize such a challenging problem as Novel Relation Detection (NRD), which aims to discover potential new relation types based on training samples of known relations. To this end, we construct two NRD datasets and exhaustively investigate a variety of out-of-scope detection methods. We further propose an effective NRD method that utilizes multi-strategy self-supervised learning to handle the problem of shallow semantic similarity in the NRD task. Experimental results demonstrate the effectiveness of our method, which significantly outperforms previous state-of-the-art methods on both datasets. | [
"Liu, Qingbin",
"Kung, Yin",
"Hao, Yanchao",
"Sui, Dianbo",
"Cheng, Siyuan",
"Chen, Xi",
"Zhang, Ningyu",
"Chen, Jiaoyan"
] | Novel Relation Detection: Discovering Unknown Relation Types via Multi-Strategy Self-Supervised Learning | findings-emnlp.211 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.212.bib | https://aclanthology.org/2023.findings-emnlp.212/ | @inproceedings{bolding-etal-2023-ask,
title = "Ask Language Model to Clean Your Noisy Translation Data",
author = "Bolding, Quinten and
Liao, Baohao and
Denis, Brandon and
Luo, Jun and
Monz, Christof",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.212",
doi = "10.18653/v1/2023.findings-emnlp.212",
pages = "3215--3236",
abstract = "TTransformer models have demonstrated remarkable performance in neural machine translation (NMT). However, their vulnerability to noisy input poses a significant challenge in practical implementation, where generating clean output from noisy input is crucial. The MTNT dataset is widely used as a benchmark for evaluating the robustness of NMT models against noisy input. Nevertheless, its utility is limited due to the presence of noise in both the source and target sentences. To address this limitation, we focus on cleaning the noise from the target sentences in MTNT, making it more suitable as a benchmark for noise evaluation. Leveraging the capabilities of large language models (LLMs), we observe their impressive abilities in noise removal. For example, they can remove emojis while considering their semantic meaning. Additionally, we show that LLM can effectively rephrase slang, jargon, and profanities. The resulting datasets, called C-MTNT, exhibit significantly less noise in the target sentences while preserving the semantic integrity of the original sentences. Our human and GPT-4 evaluations also lead to a consistent conclusion that LLM performs well on this task. Lastly, experiments on C-MTNT showcased its effectiveness in evaluating the robustness of NMT models, highlighting the potential of advanced language models for data cleaning and emphasizing C-MTNT as a valuable resource.",
}
| Transformer models have demonstrated remarkable performance in neural machine translation (NMT). However, their vulnerability to noisy input poses a significant challenge in practical implementation, where generating clean output from noisy input is crucial. The MTNT dataset is widely used as a benchmark for evaluating the robustness of NMT models against noisy input. Nevertheless, its utility is limited due to the presence of noise in both the source and target sentences. To address this limitation, we focus on cleaning the noise from the target sentences in MTNT, making it more suitable as a benchmark for noise evaluation. Leveraging the capabilities of large language models (LLMs), we observe their impressive abilities in noise removal. For example, they can remove emojis while considering their semantic meaning. Additionally, we show that LLM can effectively rephrase slang, jargon, and profanities. The resulting datasets, called C-MTNT, exhibit significantly less noise in the target sentences while preserving the semantic integrity of the original sentences. Our human and GPT-4 evaluations also lead to a consistent conclusion that LLM performs well on this task. Lastly, experiments on C-MTNT showcased its effectiveness in evaluating the robustness of NMT models, highlighting the potential of advanced language models for data cleaning and emphasizing C-MTNT as a valuable resource. | [
"Bolding, Quinten",
"Liao, Baohao",
"Denis, Br",
"on",
"Luo, Jun",
"Monz, Christof"
] | Ask Language Model to Clean Your Noisy Translation Data | findings-emnlp.212 | 2310.13469 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.213.bib | https://aclanthology.org/2023.findings-emnlp.213/ | @inproceedings{jo-etal-2023-multi,
title = "Multi-User {M}ulti{WOZ}: Task-Oriented Dialogues among Multiple Users",
author = "Jo, Yohan and
Zhao, Xinyan and
Biswas, Arijit and
Basiou, Nikoletta and
Auvray, Vincent and
Malandrakis, Nikolaos and
Metallinou, Angeliki and
Potamianos, Alexandros",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.213",
doi = "10.18653/v1/2023.findings-emnlp.213",
pages = "3237--3269",
abstract = "While most task-oriented dialogues assume conversations between the agent and one user at a time, dialogue systems are increasingly expected to communicate with multiple users simultaneously who make decisions collaboratively. To facilitate development of such systems, we release the Multi-User MultiWOZ dataset: task-oriented dialogues among two users and one agent. To collect this dataset, each user utterance from MultiWOZ 2.2 was replaced with a small chat between two users that is semantically and pragmatically consistent with the original user utterance, thus resulting in the same dialogue state and system response. These dialogues reflect interesting dynamics of collaborative decision-making in task-oriented scenarios, e.g., social chatter and deliberation. Supported by this data, we propose the novel task of multi-user contextual query rewriting: to rewrite a task-oriented chat between two users as a concise task-oriented query that retains only task-relevant information and that is directly consumable by the dialogue system. We demonstrate that in multi-user dialogues, using predicted rewrites substantially improves dialogue state tracking without modifying existing dialogue systems that are trained for single-user dialogues. Further, this method surpasses training a medium-sized model directly on multi-user dialogues and generalizes to unseen domains.",
}
| While most task-oriented dialogues assume conversations between the agent and one user at a time, dialogue systems are increasingly expected to communicate with multiple users simultaneously who make decisions collaboratively. To facilitate development of such systems, we release the Multi-User MultiWOZ dataset: task-oriented dialogues among two users and one agent. To collect this dataset, each user utterance from MultiWOZ 2.2 was replaced with a small chat between two users that is semantically and pragmatically consistent with the original user utterance, thus resulting in the same dialogue state and system response. These dialogues reflect interesting dynamics of collaborative decision-making in task-oriented scenarios, e.g., social chatter and deliberation. Supported by this data, we propose the novel task of multi-user contextual query rewriting: to rewrite a task-oriented chat between two users as a concise task-oriented query that retains only task-relevant information and that is directly consumable by the dialogue system. We demonstrate that in multi-user dialogues, using predicted rewrites substantially improves dialogue state tracking without modifying existing dialogue systems that are trained for single-user dialogues. Further, this method surpasses training a medium-sized model directly on multi-user dialogues and generalizes to unseen domains. | [
"Jo, Yohan",
"Zhao, Xinyan",
"Biswas, Arijit",
"Basiou, Nikoletta",
"Auvray, Vincent",
"Mal",
"rakis, Nikolaos",
"Metallinou, Angeliki",
"Potamianos, Alex",
"ros"
] | Multi-User MultiWOZ: Task-Oriented Dialogues among Multiple Users | findings-emnlp.213 | 2310.20479 | [
"https://github.com/yohanjo/multiuser_multiwoz"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.214.bib | https://aclanthology.org/2023.findings-emnlp.214/ | @inproceedings{zhang-etal-2023-extractive-summarization,
title = "Extractive Summarization via {C}hat{GPT} for Faithful Summary Generation",
author = "Zhang, Haopeng and
Liu, Xiao and
Zhang, Jiawei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.214",
doi = "10.18653/v1/2023.findings-emnlp.214",
pages = "3270--3278",
abstract = "Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT{'}s performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT{'}s capabilities in faithful summarization using two-stage approaches.",
}
| Extractive summarization is a crucial task in natural language processing that aims to condense long documents into shorter versions by directly extracting sentences. The recent introduction of large language models has attracted significant interest in the NLP community due to its remarkable performance on a wide range of downstream tasks. This paper first presents a thorough evaluation of ChatGPT{'}s performance on extractive summarization and compares it with traditional fine-tuning methods on various benchmark datasets. Our experimental analysis reveals that ChatGPT exhibits inferior extractive summarization performance in terms of ROUGE scores compared to existing supervised systems, while achieving higher performance based on LLM-based evaluation metrics. In addition, we explore the effectiveness of in-context learning and chain-of-thought reasoning for enhancing its performance. Furthermore, we find that applying an extract-then-generate pipeline with ChatGPT yields significant performance improvements over abstractive baselines in terms of summary faithfulness. These observations highlight potential directions for enhancing ChatGPT{'}s capabilities in faithful summarization using two-stage approaches. | [
"Zhang, Haopeng",
"Liu, Xiao",
"Zhang, Jiawei"
] | Extractive Summarization via ChatGPT for Faithful Summary Generation | findings-emnlp.214 | 2304.04193 | [
""
] | https://huggingface.co/papers/2304.04193 | 1 | 1 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.215.bib | https://aclanthology.org/2023.findings-emnlp.215/ | @inproceedings{chen-etal-2023-mapo,
title = "{MAPO}: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization",
author = "Chen, Yuyan and
Wen, Zhihao and
Fan, Ge and
Chen, Zhengyu and
Wu, Wei and
Liu, Dayiheng and
Li, Zhixu and
Liu, Bang and
Xiao, Yanghua",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.215",
doi = "10.18653/v1/2023.findings-emnlp.215",
pages = "3279--3304",
abstract = "Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community. The existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than specific LLMs. However, a good prompt is not solely defined by its wording, but also binds to the nature of the LLM in question. In this work, we first quantitatively demonstrate that different prompts should be adapted to different LLMs to enhance their capabilities across various downstream tasks in NLP. Then we novelly propose a model-adaptive prompt optimizer (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks. Extensive experiments indicate that the proposed method can effectively refine prompts for an LLM, leading to significant improvements over various downstream tasks.",
}
| Prompt engineering, as an efficient and effective way to leverage Large Language Models (LLM), has drawn a lot of attention from the research community. The existing research primarily emphasizes the importance of adapting prompts to specific tasks, rather than specific LLMs. However, a good prompt is not solely defined by its wording, but also binds to the nature of the LLM in question. In this work, we first quantitatively demonstrate that different prompts should be adapted to different LLMs to enhance their capabilities across various downstream tasks in NLP. Then we novelly propose a model-adaptive prompt optimizer (MAPO) method that optimizes the original prompts for each specific LLM in downstream tasks. Extensive experiments indicate that the proposed method can effectively refine prompts for an LLM, leading to significant improvements over various downstream tasks. | [
"Chen, Yuyan",
"Wen, Zhihao",
"Fan, Ge",
"Chen, Zhengyu",
"Wu, Wei",
"Liu, Dayiheng",
"Li, Zhixu",
"Liu, Bang",
"Xiao, Yanghua"
] | MAPO: Boosting Large Language Model Performance with Model-Adaptive Prompt Optimization | findings-emnlp.215 | 2407.04118 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.216.bib | https://aclanthology.org/2023.findings-emnlp.216/ | @inproceedings{yang-etal-2023-psycot,
title = "{P}sy{C}o{T}: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection",
author = "Yang, Tao and
Shi, Tianyuan and
Wan, Fanqi and
Quan, Xiaojun and
Wang, Qifan and
Wu, Bingzhe and
Wu, Jiaxiang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.216",
doi = "10.18653/v1/2023.findings-emnlp.216",
pages = "3305--3320",
abstract = "Recent advances in large language models (LLMs), such as ChatGPT, have showcased remarkable zero-shot performance across various NLP tasks. However, the potential of LLMs in personality detection, which involves identifying an individual{'}s personality from their written texts, remains largely unexplored. Drawing inspiration from Psychological Questionnaires, which are carefully designed by psychologists to evaluate individual personality traits through a series of targeted items, we argue that these items can be regarded as a collection of well-structured chain-of-thought (CoT) processes. By incorporating these processes, LLMs can enhance their capabilities to make more reasonable inferences on personality from textual input. In light of this, we propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner. In particular, we employ a LLM as an AI assistant with a specialization in text analysis. We prompt the assistant to rate individual items at each turn and leverage the historical rating results to derive a conclusive personality preference. Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection, achieving an average F1 score improvement of 4.23/10.63 points on two benchmark datasets compared to the standard prompting method. Our code is available at \url{https://github.com/TaoYang225/PsyCoT}.",
}
| Recent advances in large language models (LLMs), such as ChatGPT, have showcased remarkable zero-shot performance across various NLP tasks. However, the potential of LLMs in personality detection, which involves identifying an individual{'}s personality from their written texts, remains largely unexplored. Drawing inspiration from Psychological Questionnaires, which are carefully designed by psychologists to evaluate individual personality traits through a series of targeted items, we argue that these items can be regarded as a collection of well-structured chain-of-thought (CoT) processes. By incorporating these processes, LLMs can enhance their capabilities to make more reasonable inferences on personality from textual input. In light of this, we propose a novel personality detection method, called PsyCoT, which mimics the way individuals complete psychological questionnaires in a multi-turn dialogue manner. In particular, we employ a LLM as an AI assistant with a specialization in text analysis. We prompt the assistant to rate individual items at each turn and leverage the historical rating results to derive a conclusive personality preference. Our experiments demonstrate that PsyCoT significantly improves the performance and robustness of GPT-3.5 in personality detection, achieving an average F1 score improvement of 4.23/10.63 points on two benchmark datasets compared to the standard prompting method. Our code is available at \url{https://github.com/TaoYang225/PsyCoT}. | [
"Yang, Tao",
"Shi, Tianyuan",
"Wan, Fanqi",
"Quan, Xiaojun",
"Wang, Qifan",
"Wu, Bingzhe",
"Wu, Jiaxiang"
] | PsyCoT: Psychological Questionnaire as Powerful Chain-of-Thought for Personality Detection | findings-emnlp.216 | 2310.20256 | [
"https://github.com/taoyang225/psycot"
] | https://huggingface.co/papers/2310.20256 | 2 | 0 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.217.bib | https://aclanthology.org/2023.findings-emnlp.217/ | @inproceedings{ding-etal-2023-harnessing,
title = "Harnessing the power of {LLM}s: Evaluating human-{AI} text co-creation through the lens of news headline generation",
author = "Ding, Zijian and
Smith-Renner, Alison and
Zhang, Wenjuan and
Tetreault, Joel and
Jaimes, Alejandro",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.217",
doi = "10.18653/v1/2023.findings-emnlp.217",
pages = "3321--3339",
abstract = "To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation. While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs. Of the interaction methods, guiding and selecting model output added the most benefit with the lowest cost (in time and effort). Further, AI assistance did not harm participants{'} perception of control compared to freeform editing.",
}
| To explore how humans can best leverage LLMs for writing and how interacting with these models affects feelings of ownership and trust in the writing process, we compared common human-AI interaction types (e.g., guiding system, selecting from system outputs, post-editing outputs) in the context of LLM-assisted news headline generation. While LLMs alone can generate satisfactory news headlines, on average, human control is needed to fix undesirable model outputs. Of the interaction methods, guiding and selecting model output added the most benefit with the lowest cost (in time and effort). Further, AI assistance did not harm participants{'} perception of control compared to freeform editing. | [
"Ding, Zijian",
"Smith-Renner, Alison",
"Zhang, Wenjuan",
"Tetreault, Joel",
"Jaimes, Alej",
"ro"
] | Harnessing the power of LLMs: Evaluating human-AI text co-creation through the lens of news headline generation | findings-emnlp.217 | 2310.10706 | [
"https://github.com/jsndg/emnlp23-llm-headline"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.218.bib | https://aclanthology.org/2023.findings-emnlp.218/ | @inproceedings{katz-etal-2023-neretrieve,
title = "{NER}etrieve: Dataset for Next Generation Named Entity Recognition and Retrieval",
author = "Katz, Uri and
Vetzler, Matan and
Cohen, Amir and
Goldberg, Yoav",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.218",
doi = "10.18653/v1/2023.findings-emnlp.218",
pages = "3340--3354",
abstract = "Recognizing entities in texts is a central need in many information-seeking scenarios, and indeed, Named Entity Recognition (NER) is arguably one of the most successful examples of a widely adopted NLP task and corresponding NLP technology. Recent advances in large language models (LLMs) appear to provide effective solutions (also) for NER tasks that were traditionally handled with dedicated models, often matching or surpassing the abilities of the dedicated models. Should NER be considered a solved problem? We argue to the contrary: the capabilities provided by LLMs are not the end of NER research, but rather an exciting beginning. They allow taking NER to the next level, tackling increasingly more useful, and increasingly more challenging, variants. We present three variants of the NER task, together with a dataset to support them. The first is a move towards more fine-grained{---}and intersectional{---}entity types. The second is a move towards zero-shot recognition and extraction of these fine-grained types based on entity-type labels. The third, and most challenging, is the move from the recognition setup to a novel retrieval setup, where the query is a zero-shot entity type, and the expected result is all the sentences from a large, pre-indexed corpus that contain entities of these types, and their corresponding spans. We show that all of these are far from being solved. We provide a large, silver-annotated corpus of 4 million paragraphs covering 500 entity types, to facilitate research towards all of these three goals.",
}
| Recognizing entities in texts is a central need in many information-seeking scenarios, and indeed, Named Entity Recognition (NER) is arguably one of the most successful examples of a widely adopted NLP task and corresponding NLP technology. Recent advances in large language models (LLMs) appear to provide effective solutions (also) for NER tasks that were traditionally handled with dedicated models, often matching or surpassing the abilities of the dedicated models. Should NER be considered a solved problem? We argue to the contrary: the capabilities provided by LLMs are not the end of NER research, but rather an exciting beginning. They allow taking NER to the next level, tackling increasingly more useful, and increasingly more challenging, variants. We present three variants of the NER task, together with a dataset to support them. The first is a move towards more fine-grained{---}and intersectional{---}entity types. The second is a move towards zero-shot recognition and extraction of these fine-grained types based on entity-type labels. The third, and most challenging, is the move from the recognition setup to a novel retrieval setup, where the query is a zero-shot entity type, and the expected result is all the sentences from a large, pre-indexed corpus that contain entities of these types, and their corresponding spans. We show that all of these are far from being solved. We provide a large, silver-annotated corpus of 4 million paragraphs covering 500 entity types, to facilitate research towards all of these three goals. | [
"Katz, Uri",
"Vetzler, Matan",
"Cohen, Amir",
"Goldberg, Yoav"
] | NERetrieve: Dataset for Next Generation Named Entity Recognition and Retrieval | findings-emnlp.218 | 2310.14282 | [
"https://github.com/katzurik/neretrieve"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.219.bib | https://aclanthology.org/2023.findings-emnlp.219/ | @inproceedings{liu-etal-2023-sweet,
title = "{SWEET} - Weakly Supervised Person Name Extraction for Fighting Human Trafficking",
author = "Liu, Javin and
Yu, Hao and
Sujaya, Vidya and
Nair, Pratheeksha and
Pelrine, Kellin and
Rabbany, Reihaneh",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.219",
doi = "10.18653/v1/2023.findings-emnlp.219",
pages = "3355--3367",
abstract = "In this work, we propose a weak supervision pipeline SWEET: Supervise Weakly for Entity Extraction to fight Trafficking for extracting person names from noisy escort advertisements. Our method combines the simplicity of rule-matching (through antirules, i.e., negated rules) and the generalizability of large language models fine-tuned on benchmark, domain-specific and synthetic datasets, treating them as weak labels. One of the major challenges in this domain is limited labeled data. SWEET addresses this by obtaining multiple weak labels through labeling functions and effectively aggregating them. SWEET outperforms the previous supervised SOTA method for this task by 9{\%} F1 score on domain data and better generalizes to common benchmark datasets. Furthermore, we also release HTGEN, a synthetically generated dataset of escort advertisements (built using ChatGPT) to facilitate further research within the community.",
}
| In this work, we propose a weak supervision pipeline SWEET: Supervise Weakly for Entity Extraction to fight Trafficking for extracting person names from noisy escort advertisements. Our method combines the simplicity of rule-matching (through antirules, i.e., negated rules) and the generalizability of large language models fine-tuned on benchmark, domain-specific and synthetic datasets, treating them as weak labels. One of the major challenges in this domain is limited labeled data. SWEET addresses this by obtaining multiple weak labels through labeling functions and effectively aggregating them. SWEET outperforms the previous supervised SOTA method for this task by 9{\%} F1 score on domain data and better generalizes to common benchmark datasets. Furthermore, we also release HTGEN, a synthetically generated dataset of escort advertisements (built using ChatGPT) to facilitate further research within the community. | [
"Liu, Javin",
"Yu, Hao",
"Sujaya, Vidya",
"Nair, Pratheeksha",
"Pelrine, Kellin",
"Rabbany, Reihaneh"
] | SWEET - Weakly Supervised Person Name Extraction for Fighting Human Trafficking | findings-emnlp.219 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |