| column | dtype | range / notes |
|---|---|---|
| bibtex_url | string | length 41–53 |
| acl_proceedings | string | length 38–50 |
| bibtext | string | length 528–3.02k |
| abstract | string | length 17–2.35k |
| authors | list | length 1–44 |
| title | string | length 18–190 |
| id | string | length 7–19 |
| arxiv_id | string | length 10; nullable (⌀) |
| GitHub | list | length 1 |
| paper_page | string | 528 classes |
| n_linked_authors | int64 | -1 to 15 |
| upvotes | int64 | -1 to 77 |
| num_comments | int64 | -1 to 10 |
| n_authors | int64 | -1 to 52 |
| Models | list | length 0–100 |
| Datasets | list | length 0–15 |
| Spaces | list | length 0–46 |
| paper_page_exists_pre_conf | int64 | 0–1 |
| type | string | 2 classes |
https://aclanthology.org/2023.findings-emnlp.320.bib
|
https://aclanthology.org/2023.findings-emnlp.320/
|
@inproceedings{yang-etal-2023-exploiting-emotion,
title = "Exploiting Emotion-Semantic Correlations for Empathetic Response Generation",
author = "Yang, Zhou and
Ren, Zhaochun and
Yufeng, Wang and
Zhu, Xiaofei and
Chen, Zhihao and
Cai, Tiecheng and
Yunbing, Wu and
Su, Yisong and
Ju, Sibo and
Liao, Xiangwen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.320",
doi = "10.18653/v1/2023.findings-emnlp.320",
pages = "4826--4837",
abstract = "Empathetic response generation aims to generate empathetic responses by understanding the speaker{'}s emotional feelings from the language of dialogue. Recent methods capture emotional words in the language of communicators and construct them as static vectors to perceive nuanced emotions. However, linguistic research has shown that emotional words in language are dynamic and have correlations with other grammar semantic roles, i.e., words with semantic meanings, in grammar. Previous methods overlook these two characteristics, which easily lead to misunderstandings of emotions and neglect of key semantics. To address this issue, we propose a dynamical Emotion-Semantic Correlation Model (ESCM) for empathetic dialogue generation tasks. ESCM constructs dynamic emotion-semantic vectors through the interaction of context and emotions. We introduce dependency trees to reflect the correlations between emotions and semantics. Based on dynamic emotion-semantic vectors and dependency trees, we propose a dynamic correlation graph convolutional network to guide the model in learning context meanings in dialogue and generating empathetic responses. Experimental results on the EMPATHETIC-DIALOGUES dataset show that ESCM understands semantics and emotions more accurately and expresses fluent and informative empathetic responses. Our analysis results also indicate that the correlations between emotions and semantics are frequently used in dialogues, which is of great significance for empathetic perception and expression.",
}
|
Empathetic response generation aims to generate empathetic responses by understanding the speaker's emotional feelings from the language of dialogue. Recent methods capture emotional words in the language of communicators and construct them as static vectors to perceive nuanced emotions. However, linguistic research has shown that emotional words in language are dynamic and correlate with other semantic roles in grammar, i.e., words with semantic meanings. Previous methods overlook these two characteristics, which easily leads to misunderstandings of emotions and neglect of key semantics. To address this issue, we propose a dynamical Emotion-Semantic Correlation Model (ESCM) for empathetic dialogue generation tasks. ESCM constructs dynamic emotion-semantic vectors through the interaction of context and emotions. We introduce dependency trees to reflect the correlations between emotions and semantics. Based on dynamic emotion-semantic vectors and dependency trees, we propose a dynamic correlation graph convolutional network to guide the model in learning context meanings in dialogue and generating empathetic responses. Experimental results on the EMPATHETIC-DIALOGUES dataset show that ESCM understands semantics and emotions more accurately and expresses fluent and informative empathetic responses. Our analysis also indicates that the correlations between emotions and semantics are frequently used in dialogues, which is of great significance for empathetic perception and expression.
|
[
"Yang, Zhou",
"Ren, Zhaochun",
"Yufeng, Wang",
"Zhu, Xiaofei",
"Chen, Zhihao",
"Cai, Tiecheng",
"Yunbing, Wu",
"Su, Yisong",
"Ju, Sibo",
"Liao, Xiangwen"
] |
Exploiting Emotion-Semantic Correlations for Empathetic Response Generation
|
findings-emnlp.320
|
2402.17437
|
[
"https://github.com/zhouzhouyang520/empatheticdialoguegeneration_escm"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
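A minimal sketch of the dependency-guided graph convolution described in the ESCM entry above: per-token "emotion-semantic" vectors are mixed along dependency-tree edges. The single-layer design, names, and dimensions are illustrative assumptions, not the authors' implementation.

```python
# Illustrative sketch (not the authors' code): one graph-convolution step over a
# dependency tree, mixing per-token emotion-semantic vectors.
import numpy as np

def dependency_gcn_layer(token_vecs, edges, weight):
    """token_vecs: (n, d) dynamic emotion-semantic vectors.
    edges: list of (head, dependent) index pairs from a dependency parse.
    weight: (d, d) learnable projection."""
    n = token_vecs.shape[0]
    adj = np.eye(n)                      # self-loops keep each token's own signal
    for h, d in edges:
        adj[h, d] = adj[d, h] = 1.0      # undirected dependency links
    deg = adj.sum(axis=1, keepdims=True)
    hidden = (adj / deg) @ token_vecs @ weight   # normalized neighborhood mixing
    return np.maximum(hidden, 0.0)               # ReLU

rng = np.random.default_rng(0)
vecs = rng.normal(size=(5, 8))            # 5 tokens, 8-dim vectors
edges = [(1, 0), (1, 3), (3, 2), (3, 4)]  # toy parse: token 1 is the root verb
out = dependency_gcn_layer(vecs, edges, rng.normal(size=(8, 8)) * 0.1)
print(out.shape)  # (5, 8)
```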
https://aclanthology.org/2023.findings-emnlp.321.bib
|
https://aclanthology.org/2023.findings-emnlp.321/
|
@inproceedings{huang-hollenstein-2023-long,
title = "Long-Range Language Modeling with Selective Cache",
author = "Huang, Xinting and
Hollenstein, Nora",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.321",
doi = "10.18653/v1/2023.findings-emnlp.321",
pages = "4838--4858",
abstract = "The computational cost of transformer-based language models grows quadratically with the sequence length. In this paper, we introduce the selective cache, which stores the selected key-value pairs from the previous context. By selecting important key-value pairs the model makes better use of the cache so that in limited cache size, a longer context history can be stored. We design three kinds of selection methods. The first is based on human language processing. The key-value pairs are selected if they correspond to tokens that are fixated longer, as recorded in eye-tracking-while-reading experiments. We also incorporate the cognitively-inspired selection process into the language model as a trainable process, resulting in two additional methods with improved performance. The selection task is converted into a pruning task so they can be trained with differentiable masks. We demonstrate that the proposed selective cache improves the language modeling performance across different datasets. With the same number of stored key-value pairs (cache size), our selective cache outperforms XL cache and compressive cache by considerable margins.",
}
|
The computational cost of transformer-based language models grows quadratically with the sequence length. In this paper, we introduce the selective cache, which stores selected key-value pairs from the previous context. By selecting important key-value pairs, the model makes better use of the cache, so that within a limited cache size a longer context history can be stored. We design three kinds of selection methods. The first is based on human language processing: key-value pairs are selected if they correspond to tokens that are fixated on longer, as recorded in eye-tracking-while-reading experiments. We also incorporate this cognitively-inspired selection process into the language model as a trainable process, resulting in two additional methods with improved performance. The selection task is converted into a pruning task so that it can be trained with differentiable masks. We demonstrate that the proposed selective cache improves language modeling performance across different datasets. With the same number of stored key-value pairs (cache size), our selective cache outperforms the XL cache and compressive cache by considerable margins.
|
[
"Huang, Xinting",
"Hollenstein, Nora"
] |
Long-Range Language Modeling with Selective Cache
|
findings-emnlp.321
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
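A minimal sketch of the selective-cache idea from the entry above: instead of keeping the most recent key-value pairs (XL-style), keep the ones with the highest importance score. The scoring signal below is a random stand-in; the paper derives scores from eye-tracking fixations or a learned differentiable mask.

```python
# Hedged sketch: keep the top-scoring key-value pairs in a fixed-size cache.
import torch

def select_cache(keys, values, scores, cache_size):
    """keys/values: (seq, d); scores: (seq,) importance per token."""
    top = torch.topk(scores, k=min(cache_size, scores.numel())).indices
    top = top.sort().values               # keep original order for positions
    return keys[top], values[top]

seq, d, cache_size = 128, 16, 32
k, v = torch.randn(seq, d), torch.randn(seq, d)
importance = torch.rand(seq)              # stand-in for fixation/mask scores
ck, cv = select_cache(k, v, importance, cache_size)
print(ck.shape)  # torch.Size([32, 16])
```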
https://aclanthology.org/2023.findings-emnlp.322.bib
|
https://aclanthology.org/2023.findings-emnlp.322/
|
@inproceedings{flores-etal-2023-medical,
title = "Medical Text Simplification: Optimizing for Readability with Unlikelihood Training and Reranked Beam Search Decoding",
author = "Flores, Lorenzo Jaime and
Huang, Heyuan and
Shi, Kejian and
Chheang, Sophie and
Cohan, Arman",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.322",
doi = "10.18653/v1/2023.findings-emnlp.322",
pages = "4859--4873",
abstract = "Text simplification has emerged as an increasingly useful application of AI for bridging the communication gap in specialized fields such as medicine, where the lexicon is often dominated by technical jargon and complex constructs. Despite notable progress, methods in medical simplification sometimes result in the generated text having lower quality and diversity. In this work, we explore ways to further improve the readability of text simplification in the medical domain. We propose (1) a new unlikelihood loss that encourages generation of simpler terms and (2) a reranked beam search decoding method that optimizes for simplicity, which achieve better performance on readability metrics on three datasets. This study{'}s findings offer promising avenues for improving text simplification in the medical field.",
}
|
Text simplification has emerged as an increasingly useful application of AI for bridging the communication gap in specialized fields such as medicine, where the lexicon is often dominated by technical jargon and complex constructs. Despite notable progress, methods in medical simplification sometimes result in the generated text having lower quality and diversity. In this work, we explore ways to further improve the readability of text simplification in the medical domain. We propose (1) a new unlikelihood loss that encourages generation of simpler terms and (2) a reranked beam search decoding method that optimizes for simplicity, which achieve better performance on readability metrics on three datasets. This study's findings offer promising avenues for improving text simplification in the medical field.
|
[
"Flores, Lorenzo Jaime",
"Huang, Heyuan",
"Shi, Kejian",
"Chheang, Sophie",
"Cohan, Arman"
] |
Medical Text Simplification: Optimizing for Readability with Unlikelihood Training and Reranked Beam Search Decoding
|
findings-emnlp.322
|
2310.11191
|
[
"https://github.com/ljyflores/simplification-project"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
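A hedged sketch of the two ideas in the entry above: a token-level unlikelihood term that pushes probability mass off a set of "complex" token ids, and a reranker that re-orders finished beams by a readability score. The readability proxy (mean word length) and all names are assumptions for illustration.

```python
# Sketch under stated assumptions, not the paper's exact losses or reranker.
import torch

def unlikelihood_loss(logits, complex_ids):
    """logits: (seq, vocab). Penalize p(complex token) at every step: -log(1-p)."""
    probs = logits.softmax(dim=-1)[:, complex_ids]          # (seq, |complex|)
    return -torch.log1p(-probs.clamp(max=1 - 1e-6)).mean()

def rerank_beams(beams):
    """beams: list of (text, model_logprob). Prefer simpler wording first."""
    def readability(text):                # toy proxy: shorter words read easier
        words = text.split()
        return -sum(len(w) for w in words) / max(len(words), 1)
    return sorted(beams, key=lambda b: (readability(b[0]), b[1]), reverse=True)

print(float(unlikelihood_loss(torch.randn(6, 100), torch.tensor([7, 42]))))
print(rerank_beams([("utilize pharmacological agents", -1.0),
                    ("use drugs", -2.5)])[0][0])  # -> "use drugs"
```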
https://aclanthology.org/2023.findings-emnlp.323.bib
|
https://aclanthology.org/2023.findings-emnlp.323/
|
@inproceedings{huang-etal-2023-fala,
title = "{F}a{LA}: Fast Linear Adaptation for Replacing Backbone Models on Edge Devices",
author = "Huang, Shuo and
Qu, Lizhen and
Yuan, Xingliang and
Chen, Chunyang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.323",
doi = "10.18653/v1/2023.findings-emnlp.323",
pages = "4874--4885",
abstract = "In this work, we study the language model backbone replacement problem for personalized downstream tasks in a non-stationary on-device scenario. In real world, company may periodically update the knowledge and architectures of backbones to keep the competitive in the market, meanwhile, to accommodate the users{'} own preference, models are personalized to fit users{'} own distribution locally. Traditional full model tuning or transfer learning for such replacements often incur considerable local device training costs and necessitate extensive backpropagation within deep transformer layers. Addressing this issue, we propose a novel, lightweight tuning method for personalized NLP classification tasks post-backbone replacement. Our approach leverages a personalized matrix calculated from documents corresponding to users{'} old and new backbones. This matrix facilitates top-layer parameter tuning, drastically reducing backpropagation computation. To further mitigate training costs associated with matrix linear optimization, we employ correlation clustering to curate a few examples from personalized cluster sets for individuals. Our method achieves over 1000 times computation reduction in Flops for backpropagation and brings the user-specific initialization for personal matrix yielding significant performance boost compared with popular transfer learning methods.",
}
|
In this work, we study the language model backbone replacement problem for personalized downstream tasks in a non-stationary on-device scenario. In the real world, companies may periodically update the knowledge and architectures of backbones to stay competitive in the market; meanwhile, to accommodate users' own preferences, models are personalized locally to fit users' own distributions. Traditional full-model tuning or transfer learning for such replacements often incurs considerable local device training costs and necessitates extensive backpropagation within deep transformer layers. Addressing this issue, we propose a novel, lightweight tuning method for personalized NLP classification tasks post-backbone replacement. Our approach leverages a personalized matrix calculated from documents corresponding to users' old and new backbones. This matrix facilitates top-layer parameter tuning, drastically reducing backpropagation computation. To further mitigate the training costs associated with matrix linear optimization, we employ correlation clustering to curate a few examples from personalized cluster sets for individuals. Our method achieves an over 1000-fold reduction in backpropagation FLOPs and provides a user-specific initialization for the personal matrix, yielding a significant performance boost compared with popular transfer learning methods.
|
[
"Huang, Shuo",
"Qu, Lizhen",
"Yuan, Xingliang",
"Chen, Chunyang"
] |
FaLA: Fast Linear Adaptation for Replacing Backbone Models on Edge Devices
|
findings-emnlp.323
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
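A sketch of the fast-linear-adaptation intuition from the FaLA entry above: when the backbone is replaced, fit a linear map from new-backbone features to old-backbone features on a few personal documents, so the old personalized top layer can be reused without deep backpropagation. The closed-form least-squares fit is an assumption standing in for the paper's method.

```python
# Hedged sketch: linear bridge between old and new backbone feature spaces.
import numpy as np

rng = np.random.default_rng(0)
old_feats = rng.normal(size=(50, 32))   # user docs encoded by the old backbone
new_feats = rng.normal(size=(50, 48))   # same docs encoded by the new backbone
# Solve min_W ||new_feats @ W - old_feats||^2 in closed form.
W, *_ = np.linalg.lstsq(new_feats, old_feats, rcond=None)
adapted = new_feats @ W                 # (50, 32), fed to the old top layer
print(W.shape, float(np.mean((adapted - old_feats) ** 2)))
```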
https://aclanthology.org/2023.findings-emnlp.324.bib
|
https://aclanthology.org/2023.findings-emnlp.324/
|
@inproceedings{hong-etal-2023-intuitive,
title = "Intuitive Multilingual Audio-Visual Speech Recognition with a Single-Trained Model",
author = "Hong, Joanna and
Park, Se and
Ro, Yong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.324",
doi = "10.18653/v1/2023.findings-emnlp.324",
pages = "4886--4890",
abstract = "We present a novel approach to multilingual audio-visual speech recognition tasks by introducing a single model on a multilingual dataset. Motivated by a human cognitive system where humans can intuitively distinguish different languages without any conscious effort or guidance, we propose a model that can capture which language is given as an input speech by distinguishing the inherent similarities and differences between languages. To do so, we design a prompt fine-tuning technique into the largely pre-trained audio-visual representation model so that the network can recognize the language class as well as the speech with the corresponding language. Our work contributes to developing robust and efficient multilingual audio-visual speech recognition systems, reducing the need for language-specific models.",
}
|
We present a novel approach to multilingual audio-visual speech recognition tasks by introducing a single model trained on a multilingual dataset. Motivated by the human cognitive system, in which humans can intuitively distinguish different languages without any conscious effort or guidance, we propose a model that can identify which language is given as input speech by distinguishing the inherent similarities and differences between languages. To do so, we design a prompt fine-tuning technique for the large-scale pre-trained audio-visual representation model so that the network can recognize the language class as well as the speech with the corresponding language. Our work contributes to developing robust and efficient multilingual audio-visual speech recognition systems, reducing the need for language-specific models.
|
[
"Hong, Joanna",
"Park, Se",
"Ro, Yong"
] |
Intuitive Multilingual Audio-Visual Speech Recognition with a Single-Trained Model
|
findings-emnlp.324
|
2310.14946
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.325.bib
|
https://aclanthology.org/2023.findings-emnlp.325/
|
@inproceedings{serra-etal-2023-controllable,
title = "Controllable Chest {X}-Ray Report Generation from Longitudinal Representations",
author = "Dalla Serra, Francesco and
Wang, Chaoyang and
Deligianni, Fani and
Dalton, Jeff and
O{'}Neil, Alison",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.325",
doi = "10.18653/v1/2023.findings-emnlp.325",
pages = "4891--4904",
abstract = "Radiology reports are detailed text descriptions of the content of medical scans. Each report describes the presence/absence and location of relevant clinical findings, commonly including comparison with prior exams of the same patient to describe how they evolved. Radiology reporting is a time-consuming process, and scan results are often subject to delays. One strategy to speed up reporting is to integrate automated reporting systems, however clinical deployment requires high accuracy and interpretability. Previous approaches to automated radiology reporting generally do not provide the prior study as input, precluding comparison which is required for clinical accuracy in some types of scans, and offer only unreliable methods of interpretability. Therefore, leveraging an existing visual input format of anatomical tokens, we introduce two novel aspects: (1) longitudinal representation learning {--} we input the prior scan as an additional input, proposing a method to align, concatenate and fuse the current and prior visual information into a joint longitudinal representation which can be provided to the multimodal report generation model; (2) sentence-anatomy dropout {--} a training strategy for controllability in which the report generator model is trained to predict only sentences from the original report which correspond to the subset of anatomical regions given as input. We show through in-depth experiments on the MIMIC-CXR dataset how the proposed approach achieves state-of-the-art results while enabling anatomy-wise controllable report generation.",
}
|
Radiology reports are detailed text descriptions of the content of medical scans. Each report describes the presence/absence and location of relevant clinical findings, commonly including comparison with prior exams of the same patient to describe how they evolved. Radiology reporting is a time-consuming process, and scan results are often subject to delays. One strategy to speed up reporting is to integrate automated reporting systems; however, clinical deployment requires high accuracy and interpretability. Previous approaches to automated radiology reporting generally do not provide the prior study as input, precluding the comparison that is required for clinical accuracy in some types of scans, and offer only unreliable methods of interpretability. Therefore, leveraging an existing visual input format of anatomical tokens, we introduce two novel aspects: (1) longitudinal representation learning – we provide the prior scan as an additional input, proposing a method to align, concatenate and fuse the current and prior visual information into a joint longitudinal representation which can be provided to the multimodal report generation model; (2) sentence-anatomy dropout – a training strategy for controllability in which the report generator model is trained to predict only the sentences from the original report which correspond to the subset of anatomical regions given as input. We show through in-depth experiments on the MIMIC-CXR dataset how the proposed approach achieves state-of-the-art results while enabling anatomy-wise controllable report generation.
|
[
"Dalla Serra, Francesco",
"Wang, Chaoyang",
"Deligianni, Fani",
"Dalton, Jeff",
"O{'}Neil, Alison"
] |
Controllable Chest X-Ray Report Generation from Longitudinal Representations
|
findings-emnlp.325
|
2310.05881
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.326.bib
|
https://aclanthology.org/2023.findings-emnlp.326/
|
@inproceedings{tan-etal-2023-chatgpt,
title = "Is {C}hat{GPT} a Good Multi-Party Conversation Solver?",
author = "Tan, Chao-Hong and
Gu, Jia-Chen and
Ling, Zhen-Hua",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.326",
doi = "10.18653/v1/2023.findings-emnlp.326",
pages = "4905--4915",
abstract = "Large Language Models (LLMs) have emerged as influential instruments within the realm of natural language processing; nevertheless, their capacity to handle multi-party conversations (MPCs) {--} a scenario marked by the presence of multiple interlocutors involved in intricate information exchanges {--} remains uncharted. In this paper, we delve into the potential of generative LLMs such as ChatGPT and GPT-4 within the context of MPCs. An empirical analysis is conducted to assess the zero-shot learning capabilities of ChatGPT and GPT-4 by subjecting them to evaluation across three MPC datasets that encompass five representative tasks. The findings reveal that ChatGPT{'}s performance on a number of evaluated MPC tasks leaves much to be desired, whilst GPT-4{'}s results portend a promising future. Additionally, we endeavor to bolster performance through the incorporation of MPC structures, encompassing both speaker and addressee architecture. This study provides an exhaustive evaluation and analysis of applying generative LLMs to MPCs, casting a light upon the conception and creation of increasingly effective and robust MPC agents. Concurrently, this work underscores the challenges implicit in the utilization of LLMs for MPCs, such as deciphering graphical information flows and generating stylistically consistent responses.",
}
|
Large Language Models (LLMs) have emerged as influential instruments within the realm of natural language processing; nevertheless, their capacity to handle multi-party conversations (MPCs) – a scenario marked by the presence of multiple interlocutors involved in intricate information exchanges – remains uncharted. In this paper, we delve into the potential of generative LLMs such as ChatGPT and GPT-4 within the context of MPCs. An empirical analysis is conducted to assess the zero-shot learning capabilities of ChatGPT and GPT-4 by subjecting them to evaluation across three MPC datasets that encompass five representative tasks. The findings reveal that ChatGPT's performance on a number of evaluated MPC tasks leaves much to be desired, whilst GPT-4's results portend a promising future. Additionally, we endeavor to bolster performance through the incorporation of MPC structures, encompassing both speaker and addressee architecture. This study provides an exhaustive evaluation and analysis of applying generative LLMs to MPCs, casting a light upon the conception and creation of increasingly effective and robust MPC agents. Concurrently, this work underscores the challenges implicit in the utilization of LLMs for MPCs, such as deciphering graphical information flows and generating stylistically consistent responses.
|
[
"Tan, Chao-Hong",
"Gu, Jia-Chen",
"Ling, Zhen-Hua"
] |
Is ChatGPT a Good Multi-Party Conversation Solver?
|
findings-emnlp.326
| null |
[
"https://github.com/lxchtan/chatmpc"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.327.bib
|
https://aclanthology.org/2023.findings-emnlp.327/
|
@inproceedings{lu-etal-2023-improving,
title = "Improving End-to-End Speech Processing by Efficient Text Data Utilization with Latent Synthesis",
author = "Lu, Jianqiao and
Huang, Wenyong and
Zheng, Nianzu and
Zeng, Xingshan and
Yeung, Yu and
Chen, Xiao",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.327",
doi = "10.18653/v1/2023.findings-emnlp.327",
pages = "4916--4928",
abstract = "Training a high performance end-to-end speech (E2E) processing model requires an enormous amount of labeled speech data, especially in the era of data-centric artificial intelligence. However, labeled speech data are usually scarcer and more expensive for collection, compared to textual data. We propose Latent Synthesis (LaSyn), an efficient textual data utilization framework for E2E speech processing models. We train a latent synthesizer to convert textual data into an intermediate latent representation of a pre-trained speech model. These pseudo acoustic representations of textual data augment acoustic data for model training. We evaluate LaSyn on low-resource automatic speech recognition (ASR) and spoken language understanding (SLU) tasks. For ASR, LaSyn improves an E2E baseline trained on LibriSpeech train-clean-100, with relative word error rate reductions over 22.3{\%} on different test sets. For SLU, LaSyn improves our E2E baseline by absolute 4.1{\%} for intent classification accuracy and 3.8{\%} for slot filling SLU-F1 on SLURP, and absolute 4.49{\%} and 2.25{\%} for exact match (EM) and EM-Tree accuracies on STOP respectively. With fewer parameters, the results of LaSyn are competitive to published state-of-the-art works. The results demonstrate the quality of the augmented training data.",
}
|
Training a high-performance end-to-end (E2E) speech processing model requires an enormous amount of labeled speech data, especially in the era of data-centric artificial intelligence. However, labeled speech data are usually scarcer and more expensive to collect than textual data. We propose Latent Synthesis (LaSyn), an efficient textual data utilization framework for E2E speech processing models. We train a latent synthesizer to convert textual data into an intermediate latent representation of a pre-trained speech model. These pseudo acoustic representations of textual data augment acoustic data for model training. We evaluate LaSyn on low-resource automatic speech recognition (ASR) and spoken language understanding (SLU) tasks. For ASR, LaSyn improves an E2E baseline trained on LibriSpeech train-clean-100, with relative word error rate reductions over 22.3% on different test sets. For SLU, LaSyn improves our E2E baseline by an absolute 4.1% in intent classification accuracy and 3.8% in slot-filling SLU-F1 on SLURP, and by an absolute 4.49% and 2.25% in exact match (EM) and EM-Tree accuracies on STOP, respectively. With fewer parameters, the results of LaSyn are competitive with published state-of-the-art work. These results demonstrate the quality of the augmented training data.
|
[
"Lu, Jianqiao",
"Huang, Wenyong",
"Zheng, Nianzu",
"Zeng, Xingshan",
"Yeung, Yu",
"Chen, Xiao"
] |
Improving End-to-End Speech Processing by Efficient Text Data Utilization with Latent Synthesis
|
findings-emnlp.327
|
2310.05374
|
[
""
] |
https://huggingface.co/papers/2310.05374
| 0 | 0 | 0 | 6 |
[] |
[] |
[] | 1 |
Poster
|
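A toy sketch of the LaSyn idea above: a "latent synthesizer" maps text token embeddings into the latent space of a pre-trained speech encoder, so textual data can be mixed with real acoustic latents when training the downstream head. The architecture and sizes are illustrative assumptions.

```python
# Hedged sketch: project text embeddings into a pseudo-acoustic latent space.
import torch
import torch.nn as nn

class LatentSynthesizer(nn.Module):
    def __init__(self, text_dim=64, latent_dim=96):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(text_dim, latent_dim), nn.GELU(),
            nn.Linear(latent_dim, latent_dim),
        )

    def forward(self, text_emb):          # (batch, seq, text_dim)
        return self.proj(text_emb)        # pseudo-acoustic latents

synth = LatentSynthesizer()
pseudo_acoustic = synth(torch.randn(8, 20, 64))
real_acoustic = torch.randn(8, 20, 96)    # latents from the speech encoder
batch = torch.cat([pseudo_acoustic, real_acoustic], dim=0)  # augmented batch
print(batch.shape)  # torch.Size([16, 20, 96])
```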
https://aclanthology.org/2023.findings-emnlp.328.bib
|
https://aclanthology.org/2023.findings-emnlp.328/
|
@inproceedings{mao-etal-2023-bipartite,
title = "Bipartite Graph Pre-training for Unsupervised Extractive Summarization with Graph Convolutional Auto-Encoders",
author = "Mao, Qianren and
Zhao, Shaobo and
Li, Jiarui and
Gu, Xiaolei and
He, Shizhu and
Li, Bo and
Li, Jianxin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.328",
doi = "10.18653/v1/2023.findings-emnlp.328",
pages = "4929--4941",
abstract = "Pre-trained sentence representations are crucial for identifying significant sentences in unsupervised document extractive summarization. However, the traditional two-step paradigm of pre-training and sentence-ranking, creates a gap due to differing optimization objectives. To address this issue, we argue that utilizing pre-trained embeddings derived from a process specifically designed to optimize informative and distinctive sentence representations helps rank significant sentences. To do so, we propose a novel graph pre-training auto-encoder to obtain sentence embeddings by explicitly modelling intra-sentential distinctive features and inter-sentential cohesive features through sentence-word bipartite graphs. These fine-tuned sentence embeddings are then utilized in a graph-based ranking algorithm for unsupervised summarization. Our method is a plug-and-play pre-trained model that produces predominant performance for unsupervised summarization frameworks by providing summary-worthy sentence representations. It surpasses heavy BERT- or RoBERTa-based sentence representations in downstream tasks.",
}
|
Pre-trained sentence representations are crucial for identifying significant sentences in unsupervised document extractive summarization. However, the traditional two-step paradigm of pre-training and sentence-ranking creates a gap due to differing optimization objectives. To address this issue, we argue that utilizing pre-trained embeddings derived from a process specifically designed to optimize informative and distinctive sentence representations helps rank significant sentences. To do so, we propose a novel graph pre-training auto-encoder to obtain sentence embeddings by explicitly modelling intra-sentential distinctive features and inter-sentential cohesive features through sentence-word bipartite graphs. These fine-tuned sentence embeddings are then utilized in a graph-based ranking algorithm for unsupervised summarization. Our method is a plug-and-play pre-trained model that produces predominant performance for unsupervised summarization frameworks by providing summary-worthy sentence representations. It surpasses heavy BERT- or RoBERTa-based sentence representations in downstream tasks.
|
[
"Mao, Qianren",
"Zhao, Shaobo",
"Li, Jiarui",
"Gu, Xiaolei",
"He, Shizhu",
"Li, Bo",
"Li, Jianxin"
] |
Bipartite Graph Pre-training for Unsupervised Extractive Summarization with Graph Convolutional Auto-Encoders
|
findings-emnlp.328
|
2310.18992
|
[
"https://github.com/opensum/bigae"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
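A minimal sketch of the downstream step in the entry above: once sentence embeddings are available (random stand-ins here for the BiGAE embeddings), rank sentences by centrality in a similarity graph and extract the top ones. This is generic unsupervised graph ranking, not the authors' exact algorithm.

```python
# Hedged sketch: degree-centrality ranking over a sentence-similarity graph.
import numpy as np

def extract_summary(sent_embs, k=2):
    norm = sent_embs / np.linalg.norm(sent_embs, axis=1, keepdims=True)
    sim = norm @ norm.T                   # cosine similarity between sentences
    np.fill_diagonal(sim, 0.0)
    centrality = sim.sum(axis=1)          # degree centrality per sentence
    return np.argsort(-centrality)[:k]    # indices of summary sentences

rng = np.random.default_rng(0)
embs = rng.normal(size=(6, 32))           # 6 sentences in the document
print(extract_summary(embs))              # top-2 sentence indices
```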
https://aclanthology.org/2023.findings-emnlp.329.bib
|
https://aclanthology.org/2023.findings-emnlp.329/
|
@inproceedings{lee-etal-2023-bayesian,
title = "{B}ayesian Multi-Task Transfer Learning for Soft Prompt Tuning",
author = "Lee, Haeju and
Jeong, Minchan and
Yun, Se-Young and
Kim, Kee-Eung",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.329",
doi = "10.18653/v1/2023.findings-emnlp.329",
pages = "4942--4958",
abstract = "Prompt tuning, in which prompts are optimized to adapt large-scale pre-trained language models to downstream tasks instead of fine-tuning the full model parameters, has been shown to be particularly effective when the prompts are trained in the multi-task transfer learning setting. These methods generally involve individually training prompts for each source task and then aggregating them to provide the initialization of the prompt for the target task. However, this approach critically ignores the fact that some of the source tasks could be negatively or positively interfering with each other. We argue that when we extract knowledge from source tasks via training source prompts, we need to consider this correlation among source tasks for better transfer to target tasks. To this end, we propose a Bayesian approach where we work with the posterior distribution of prompts across source tasks. We obtain representative source prompts corresponding to the samples from the posterior utilizing Stein Variational Gradient Descent, which are then aggregated to constitute the initial target prompt. We show extensive experimental results on the standard benchmark NLP tasks, where our Bayesian multi-task transfer learning approach outperforms the state-of-the-art methods in many settings. Furthermore, our approach requires no auxiliary models other than the prompt itself, achieving high degree of parameter-efficiency.",
}
|
Prompt tuning, in which prompts are optimized to adapt large-scale pre-trained language models to downstream tasks instead of fine-tuning the full model parameters, has been shown to be particularly effective when the prompts are trained in the multi-task transfer learning setting. These methods generally involve individually training prompts for each source task and then aggregating them to provide the initialization of the prompt for the target task. However, this approach critically ignores the fact that some of the source tasks could interfere with each other negatively or positively. We argue that when we extract knowledge from source tasks via training source prompts, we need to consider this correlation among source tasks for better transfer to target tasks. To this end, we propose a Bayesian approach in which we work with the posterior distribution of prompts across source tasks. We obtain representative source prompts corresponding to samples from the posterior using Stein Variational Gradient Descent, which are then aggregated to constitute the initial target prompt. We show extensive experimental results on standard benchmark NLP tasks, where our Bayesian multi-task transfer learning approach outperforms the state-of-the-art methods in many settings. Furthermore, our approach requires no auxiliary models other than the prompt itself, achieving a high degree of parameter efficiency.
|
[
"Lee, Haeju",
"Jeong, Minchan",
"Yun, Se-Young",
"Kim, Kee-Eung"
] |
Bayesian Multi-Task Transfer Learning for Soft Prompt Tuning
|
findings-emnlp.329
|
2402.08594
|
[
"https://github.com/heyzude/bmtpt"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
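A hedged sketch of one Stein Variational Gradient Descent (SVGD) step over a set of "source prompt" particles, mirroring how the Bayesian prompt-transfer entry above samples from a posterior over prompts and then aggregates. The Gaussian log-posterior gradient is a stand-in; in the paper the gradient comes from task losses.

```python
# Sketch under stated assumptions: one SVGD update with an RBF kernel.
import numpy as np

def rbf_kernel(x, h=1.0):
    diff = x[:, None, :] - x[None, :, :]            # (n, n, d), diff[j, i] = x_j - x_i
    k = np.exp(-(diff ** 2).sum(-1) / (2 * h * h))  # (n, n)
    grad_k = -diff / (h * h) * k[:, :, None]        # d k(x_j, x_i) / d x_j
    return k, grad_k

def svgd_step(particles, grad_logp, step=0.1):
    n = particles.shape[0]
    k, grad_k = rbf_kernel(particles)
    phi = (k @ grad_logp + grad_k.sum(axis=0)) / n  # Stein direction per particle
    return particles + step * phi

rng = np.random.default_rng(0)
prompts = rng.normal(size=(8, 16))        # 8 prompt particles, 16-dim each
grad_logp = -prompts                      # gradient of a standard-normal log-density
prompts = svgd_step(prompts, grad_logp)
target_init = prompts.mean(axis=0)        # aggregate into the target prompt init
print(target_init.shape)  # (16,)
```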
https://aclanthology.org/2023.findings-emnlp.330.bib
|
https://aclanthology.org/2023.findings-emnlp.330/
|
@inproceedings{ma-etal-2023-ccim,
title = "{CCIM}: Cross-modal Cross-lingual Interactive Image Translation",
author = "Ma, Cong and
Zhang, Yaping and
Tu, Mei and
Zhao, Yang and
Zhou, Yu and
Zong, Chengqing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.330",
doi = "10.18653/v1/2023.findings-emnlp.330",
pages = "4959--4965",
abstract = "Text image machine translation (TIMT) which translates source language text images into target language texts has attracted intensive attention in recent years. Although the end-to-end TIMT model directly generates target translation from encoded text image features with an efficient architecture, it lacks the recognized source language information resulting in a decrease in translation performance. In this paper, we propose a novel Cross-modal Cross-lingual Interactive Model (CCIM) to incorporate source language information by synchronously generating source language and target language results through an interactive attention mechanism between two language decoders. Extensive experimental results have shown the interactive decoder significantly outperforms end-to-end TIMT models and has faster decoding speed with smaller model size than cascade models.",
}
|
Text image machine translation (TIMT), which translates source-language text images into target-language texts, has attracted intensive attention in recent years. Although the end-to-end TIMT model directly generates the target translation from encoded text image features with an efficient architecture, it lacks the recognized source-language information, resulting in a decrease in translation performance. In this paper, we propose a novel Cross-modal Cross-lingual Interactive Model (CCIM) to incorporate source-language information by synchronously generating source-language and target-language results through an interactive attention mechanism between two language decoders. Extensive experimental results have shown that the interactive decoder significantly outperforms end-to-end TIMT models and has faster decoding speed and a smaller model size than cascade models.
|
[
"Ma, Cong",
"Zhang, Yaping",
"Tu, Mei",
"Zhao, Yang",
"Zhou, Yu",
"Zong, Chengqing"
] |
CCIM: Cross-modal Cross-lingual Interactive Image Translation
|
findings-emnlp.330
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.331.bib
|
https://aclanthology.org/2023.findings-emnlp.331/
|
@inproceedings{yu-etal-2023-trams,
title = "{TRAMS}: Training-free Memory Selection for Long-range Language Modeling",
author = "Yu, Haofei and
Wang, Cunxiang and
Zhang, Yue and
Bi, Wei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.331",
doi = "10.18653/v1/2023.findings-emnlp.331",
pages = "4966--4972",
abstract = "The Transformer architecture is crucial for numerous AI models, but it still faces challenges in long-range language modeling. Though several specific transformer architectures have been designed to tackle issues of long-range dependencies, existing methods like Transformer-XL are plagued by a high percentage of ineffective memories. In this study, we present a plug-and-play strategy, known as TRAining-free Memory Selection (TRAMS), that selects tokens participating in attention calculation based on one simple metric. This strategy allows us to keep tokens that are likely to have a high attention score with the current queries and ignore the other ones. We have tested our approach on the word-level benchmark (WikiText-103) and the character-level benchmark (enwik8), and the results indicate an improvement without having additional training or adding additional parameters.",
}
|
The Transformer architecture is crucial for numerous AI models, but it still faces challenges in long-range language modeling. Though several specific transformer architectures have been designed to tackle issues of long-range dependencies, existing methods like Transformer-XL are plagued by a high percentage of ineffective memories. In this study, we present a plug-and-play strategy, known as TRAining-free Memory Selection (TRAMS), that selects tokens participating in the attention calculation based on one simple metric. This strategy allows us to keep tokens that are likely to have a high attention score with the current queries and to ignore the other ones. We have tested our approach on the word-level benchmark (WikiText-103) and the character-level benchmark (enwik8), and the results indicate an improvement without additional training or added parameters.
|
[
"Yu, Haofei",
"Wang, Cunxiang",
"Zhang, Yue",
"Bi, Wei"
] |
TRAMS: Training-free Memory Selection for Long-range Language Modeling
|
findings-emnlp.331
|
2310.15494
|
[
"https://github.com/lwaekfjlk/trams"
] |
https://huggingface.co/papers/2310.15494
| 1 | 1 | 1 | 4 |
[] |
[] |
[] | 1 |
Poster
|
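Training-free memory selection in the spirit of the TRAMS entry above: rank cached memory keys by a cheap, query-independent metric and let only the top-ranked ones participate in attention. The specific metric below (raw key norm) is an illustrative assumption, not the paper's exact formula.

```python
# Hedged sketch: pick the m most salient memory tokens, no training involved.
import torch

def select_memory(mem_keys, mem_values, m):
    metric = torch.linalg.norm(mem_keys, dim=-1)   # assumed per-token salience
    idx = torch.topk(metric, k=m).indices.sort().values
    return mem_keys[idx], mem_values[idx]

mem_k, mem_v = torch.randn(512, 64), torch.randn(512, 64)
sel_k, sel_v = select_memory(mem_k, mem_v, m=128)
print(sel_k.shape)  # torch.Size([128, 64]); only these join the attention
```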
https://aclanthology.org/2023.findings-emnlp.332.bib
|
https://aclanthology.org/2023.findings-emnlp.332/
|
@inproceedings{gu-etal-2023-critical,
title = "A Critical Analysis of Document Out-of-Distribution Detection",
author = "Gu, Jiuxiang and
Ming, Yifei and
Zhou, Yi and
Kuen, Jason and
Morariu, Vlad and
Zhao, Handong and
Zhang, Ruiyi and
Barmpalios, Nikolaos and
Liu, Anqi and
Li, Yixuan and
Sun, Tong and
Nenkova, Ani",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.332",
doi = "10.18653/v1/2023.findings-emnlp.332",
pages = "4973--4999",
abstract = "Large-scale pre-training is widely used in recent document understanding tasks. During deployment, one may expect that models should trigger a conservative fallback policy when encountering out-of-distribution (OOD) samples, which highlights the importance of OOD detection. However, most existing OOD detection methods focus on single-modal inputs such as images or texts. While documents are multi-modal in nature, it is underexplored if and how multi-modal information in documents can be exploited for OOD detection. In this work, we first provide a systematic and in-depth analysis on OOD detection for document understanding models. We study the effects of model modality, pre-training, and fine-tuning across various types of OOD inputs. In particular, we find that spatial information is critical for document OOD detection. To better exploit spatial information, we propose a spatial-aware adapter, which serves as a parameter-efficient add-on module to adapt transformer-based language models to the document domain. Extensive experiments show that adding the spatial-aware adapter significantly improves the OOD detection performance compared to directly using the language model and achieves superior performance compared to competitive baselines.",
}
|
Large-scale pre-training is widely used in recent document understanding tasks. During deployment, one may expect that models should trigger a conservative fallback policy when encountering out-of-distribution (OOD) samples, which highlights the importance of OOD detection. However, most existing OOD detection methods focus on single-modal inputs such as images or texts. While documents are multi-modal in nature, it remains underexplored whether and how multi-modal information in documents can be exploited for OOD detection. In this work, we first provide a systematic and in-depth analysis of OOD detection for document understanding models. We study the effects of model modality, pre-training, and fine-tuning across various types of OOD inputs. In particular, we find that spatial information is critical for document OOD detection. To better exploit spatial information, we propose a spatial-aware adapter, which serves as a parameter-efficient add-on module to adapt transformer-based language models to the document domain. Extensive experiments show that adding the spatial-aware adapter significantly improves the OOD detection performance compared to directly using the language model and achieves superior performance compared to competitive baselines.
|
[
"Gu, Jiuxiang",
"Ming, Yifei",
"Zhou, Yi",
"Kuen, Jason",
"Morariu, Vlad",
"Zhao, H",
"ong",
"Zhang, Ruiyi",
"Barmpalios, Nikolaos",
"Liu, Anqi",
"Li, Yixuan",
"Sun, Tong",
"Nenkova, Ani"
] |
A Critical Analysis of Document Out-of-Distribution Detection
|
findings-emnlp.332
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.333.bib
|
https://aclanthology.org/2023.findings-emnlp.333/
|
@inproceedings{wang-etal-2023-improving-neural,
title = "Improving Neural Machine Translation by Multi-Knowledge Integration with Prompting",
author = "Wang, Ke and
Xie, Jun and
Zhang, Yuqi and
Zhao, Yu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.333",
doi = "10.18653/v1/2023.findings-emnlp.333",
pages = "5000--5010",
abstract = "Improving neural machine translation (NMT) systems with prompting has achieved significant progress in recent years. In this work, we focus on how to integrate multi-knowledge, multiple types of knowledge, into NMT models to enhance the performance with prompting. We propose a unified framework, which can integrate effectively multiple types of knowledge including sentences, terminologies/phrases and translation templates into NMT models. We utilize multiple types of knowledge as prefix-prompts of input for the encoder and decoder of NMT models to guide the translation process. The approach requires no changes to the model architecture and effectively adapts to domain-specific translation without retraining. The experiments on English-Chinese and English-German translation demonstrate that our approach significantly outperform strong baselines, achieving high translation quality and terminology match accuracy.",
}
|
Improving neural machine translation (NMT) systems with prompting has achieved significant progress in recent years. In this work, we focus on how to integrate multi-knowledge, i.e., multiple types of knowledge, into NMT models to enhance performance with prompting. We propose a unified framework which can effectively integrate multiple types of knowledge, including sentences, terminologies/phrases and translation templates, into NMT models. We utilize multiple types of knowledge as prefix-prompts of the input for the encoder and decoder of NMT models to guide the translation process. The approach requires no changes to the model architecture and effectively adapts to domain-specific translation without retraining. The experiments on English-Chinese and English-German translation demonstrate that our approach significantly outperforms strong baselines, achieving high translation quality and terminology match accuracy.
|
[
"Wang, Ke",
"Xie, Jun",
"Zhang, Yuqi",
"Zhao, Yu"
] |
Improving Neural Machine Translation by Multi-Knowledge Integration with Prompting
|
findings-emnlp.333
|
2312.04807
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
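A sketch of the prefix-prompt construction described in the entry above: multiple knowledge types (a similar sentence pair, terminology constraints, a translation template) are serialized as a prefix to the source sentence. The tag names and separators are assumptions for illustration.

```python
# Hedged sketch: serialize multi-knowledge into a prefix-prompted NMT input.
def build_prompted_input(src, term_pairs, template=None, similar_pair=None):
    parts = []
    if similar_pair:                       # a retrieved similar sentence pair
        parts.append(f"<sent> {similar_pair[0]} = {similar_pair[1]}")
    for s_term, t_term in term_pairs:      # terminology constraints
        parts.append(f"<term> {s_term} = {t_term}")
    if template:                           # a translation template
        parts.append(f"<tmpl> {template}")
    return " ".join(parts + ["<src>", src])

print(build_prompted_input(
    src="The patient received 5 mg of warfarin.",
    term_pairs=[("warfarin", "华法林")],
    template="患者接受了 [X]。",
    similar_pair=("The patient received aspirin.", "患者接受了阿司匹林。"),
))
```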
https://aclanthology.org/2023.findings-emnlp.334.bib
|
https://aclanthology.org/2023.findings-emnlp.334/
|
@inproceedings{margatina-etal-2023-active,
title = "Active Learning Principles for In-Context Learning with Large Language Models",
author = "Margatina, Katerina and
Schick, Timo and
Aletras, Nikolaos and
Dwivedi-Yu, Jane",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.334",
doi = "10.18653/v1/2023.findings-emnlp.334",
pages = "5011--5034",
abstract = "The remarkable advancements in large language models (LLMs) have significantly enhanced predictive performance in few-shot learning settings. By using only a small number of labeled examples, referred to as demonstrations, LLMs can effectively perform the task at hand through in-context learning. However, the process of selecting demonstrations for maximizing performance has received limited attention in prior work. This paper addresses the issue of identifying the most informative demonstrations for few-shot learning by approaching it as a pool-based Active Learning (AL) problem over a single iteration. We compare standard AL algorithms based on uncertainty, diversity, and similarity, and consistently observe that the latter outperforms all other methods, including random sampling. Our extensive experimentation involving a diverse range of GPT and OPT models across 24 classification and multi-choice tasks, coupled with thorough analysis, unambiguously demonstrates the importance of using demonstrations that are semantically similar to the domain of the test examples. In fact, we show higher average classification performance using {``}similar{''} demonstrations with GPT-2 (124M) than random demonstrations with GPT-Neox (20B). Notably, while diversity sampling shows promise, uncertainty sampling, despite its success in conventional supervised learning AL scenarios, performs poorly in in-context learning.",
}
|
The remarkable advancements in large language models (LLMs) have significantly enhanced predictive performance in few-shot learning settings. By using only a small number of labeled examples, referred to as demonstrations, LLMs can effectively perform the task at hand through in-context learning. However, the process of selecting demonstrations for maximizing performance has received limited attention in prior work. This paper addresses the issue of identifying the most informative demonstrations for few-shot learning by approaching it as a pool-based Active Learning (AL) problem over a single iteration. We compare standard AL algorithms based on uncertainty, diversity, and similarity, and consistently observe that the latter outperforms all other methods, including random sampling. Our extensive experimentation involving a diverse range of GPT and OPT models across 24 classification and multi-choice tasks, coupled with thorough analysis, unambiguously demonstrates the importance of using demonstrations that are semantically similar to the domain of the test examples. In fact, we show higher average classification performance using "similar" demonstrations with GPT-2 (124M) than random demonstrations with GPT-NeoX (20B). Notably, while diversity sampling shows promise, uncertainty sampling, despite its success in conventional supervised learning AL scenarios, performs poorly in in-context learning.
|
[
"Margatina, Katerina",
"Schick, Timo",
"Aletras, Nikolaos",
"Dwivedi-Yu, Jane"
] |
Active Learning Principles for In-Context Learning with Large Language Models
|
findings-emnlp.334
|
2305.14264
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
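A sketch of the winning strategy in the entry above: pick in-context demonstrations from the labeled pool that are most similar to the test input. The embeddings here are random stand-ins; a sentence encoder would supply them in practice.

```python
# Hedged sketch: similarity-based demonstration selection for in-context learning.
import numpy as np

def select_demonstrations(pool_embs, test_emb, k=4):
    pool = pool_embs / np.linalg.norm(pool_embs, axis=1, keepdims=True)
    test = test_emb / np.linalg.norm(test_emb)
    cosine = pool @ test                   # similarity of each candidate to the input
    return np.argsort(-cosine)[:k]         # indices of the k nearest examples

rng = np.random.default_rng(0)
pool = rng.normal(size=(100, 384))         # 100 labeled candidate demonstrations
test = rng.normal(size=384)
print(select_demonstrations(pool, test))   # demo indices to put in the prompt
```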
https://aclanthology.org/2023.findings-emnlp.335.bib
|
https://aclanthology.org/2023.findings-emnlp.335/
|
@inproceedings{liu-etal-2023-intemats,
title = "{I}nte{MAT}s: Integrating Granularity-Specific Multilingual Adapters for Cross-Lingual Transfer",
author = "Liu, Meizhen and
Guo, Xu and
Jiakai, He and
Chen, Jianye and
Zhou, Fengyu and
Hui, Siu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.335",
doi = "10.18653/v1/2023.findings-emnlp.335",
pages = "5035--5049",
abstract = "Multilingual language models (MLLMs) have achieved remarkable success in various cross-lingual transfer tasks. However, they suffer poor performance in zero-shot low-resource languages, particularly when dealing with longer contexts. Existing research mainly relies on full-model fine-tuning on large parallel datasets to enhance the cross-lingual alignment of MLLMs, which is computationally expensive. In this paper, we propose InteMATs, a novel approach that integrates multilingual adapters trained on texts of different levels of granularity. To achieve this, we curate a multilingual parallel dataset comprising 42 languages to pre-train sentence-level and document-level adapters under the contrastive learning framework. Extensive experiments demonstrate the effectiveness of InteMATs in improving the cross-lingual transfer performance of MLLMs, especially on low-resource languages. Finally, our comprehensive analyses and ablation studies provide a deep understanding of the high-quality representations derived by InteMATs.",
}
|
Multilingual language models (MLLMs) have achieved remarkable success in various cross-lingual transfer tasks. However, they suffer from poor performance on zero-shot low-resource languages, particularly when dealing with longer contexts. Existing research mainly relies on full-model fine-tuning on large parallel datasets to enhance the cross-lingual alignment of MLLMs, which is computationally expensive. In this paper, we propose InteMATs, a novel approach that integrates multilingual adapters trained on texts of different levels of granularity. To achieve this, we curate a multilingual parallel dataset comprising 42 languages to pre-train sentence-level and document-level adapters under a contrastive learning framework. Extensive experiments demonstrate the effectiveness of InteMATs in improving the cross-lingual transfer performance of MLLMs, especially on low-resource languages. Finally, our comprehensive analyses and ablation studies provide a deep understanding of the high-quality representations derived by InteMATs.
|
[
"Liu, Meizhen",
"Guo, Xu",
"Jiakai, He",
"Chen, Jianye",
"Zhou, Fengyu",
"Hui, Siu"
] |
InteMATs: Integrating Granularity-Specific Multilingual Adapters for Cross-Lingual Transfer
|
findings-emnlp.335
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.336.bib
|
https://aclanthology.org/2023.findings-emnlp.336/
|
@inproceedings{dou-etal-2023-plugmed,
title = "{P}lug{M}ed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning",
author = "Dou, Chengfeng and
Jin, Zhi and
Jiao, Wenpin and
Zhao, Haiyan and
Zhao, Yongqiang and
Tao, Zhengwei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.336",
doi = "10.18653/v1/2023.findings-emnlp.336",
pages = "5050--5066",
abstract = "The patient-centered medical dialogue systems strive to offer diagnostic interpretation services to users who are less knowledgeable about medical knowledge, through emphasizing the importance of providing responses specific to the patients. It is difficult for the large language models (LLMs) to guarantee the specificity of responses in spite of its promising performance even in some tasks in medical field. Inspired by in-context learning, we propose PlugMed, a Plug-and-Play Medical Dialogue System, for addressing this challenge. PlugMed is equipped with two modules, the prompt generation (PG) module and the response ranking (RR) module, to enhances LLMs{'} dialogue strategies for improving the specificity of the dialogue. The PG module is designed to stimulate the imitative ability of LLMs by providing them with real dialogues from similar patients as prompts. The RR module incorporates fine-tuned small model as response filter to enable the selection of appropriate responses generated by LLMs. Furthermore, we introduce a new evaluation method based on matching both user{'}s intent and high-frequency medical term to effectively assess the specificity of the responses. We conduct experimental evaluations on three medical dialogue datasets, and the results, including both automatic and human evaluation, demonstrate the effectiveness of our approach.",
}
|
Patient-centered medical dialogue systems strive to offer diagnostic interpretation services to users with limited medical knowledge, emphasizing the importance of providing responses specific to the patient. It is difficult for large language models (LLMs) to guarantee the specificity of responses despite their promising performance, even on some tasks in the medical field. Inspired by in-context learning, we propose PlugMed, a Plug-and-Play Medical Dialogue System, to address this challenge. PlugMed is equipped with two modules, the prompt generation (PG) module and the response ranking (RR) module, to enhance LLMs' dialogue strategies and improve the specificity of the dialogue. The PG module is designed to stimulate the imitative ability of LLMs by providing them with real dialogues from similar patients as prompts. The RR module incorporates a fine-tuned small model as a response filter to enable the selection of appropriate responses generated by LLMs. Furthermore, we introduce a new evaluation method, based on matching both the user's intent and high-frequency medical terms, to effectively assess the specificity of the responses. We conduct experimental evaluations on three medical dialogue datasets, and the results, including both automatic and human evaluation, demonstrate the effectiveness of our approach.
|
[
"Dou, Chengfeng",
"Jin, Zhi",
"Jiao, Wenpin",
"Zhao, Haiyan",
"Zhao, Yongqiang",
"Tao, Zhengwei"
] |
PlugMed: Improving Specificity in Patient-Centered Medical Dialogue Generation using In-Context Learning
|
findings-emnlp.336
|
2305.11508
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.337.bib
|
https://aclanthology.org/2023.findings-emnlp.337/
|
@inproceedings{yan-etal-2023-codetransocean,
title = "{C}ode{T}rans{O}cean: A Comprehensive Multilingual Benchmark for Code Translation",
author = "Yan, Weixiang and
Tian, Yuchen and
Li, Yunzhe and
Chen, Qian and
Wang, Wen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.337",
doi = "10.18653/v1/2023.findings-emnlp.337",
pages = "5067--5089",
abstract = "Recent code translation techniques exploit neural machine translation models to translate source code from one programming language to another to satisfy production compatibility or to improve efficiency of codebase maintenance. Most existing code translation datasets only focus on a single pair of popular programming languages. To advance research on code translation and meet diverse requirements of real-world applications, we construct **CodeTransOcean**, a large-scale comprehensive benchmark that supports the largest variety of programming languages for code translation. CodeTransOcean consists of three novel multilingual datasets, namely, **MultilingualTrans** supporting translations between multiple popular programming languages, **NicheTrans** for translating between niche programming languages and popular ones, and **LLMTrans** for evaluating executability of translated code by large language models (LLMs). CodeTransOcean also includes a novel cross-framework dataset, **DLTrans**, for translating deep learning code across different frameworks. We develop multilingual modeling approaches for code translation and demonstrate their great potential in improving the translation quality of both low-resource and high-resource language pairs and boosting the training efficiency. We also propose a novel evaluation metric **Debugging Success Rate@K** for program-level code translation. Last but not least, we evaluate LLM ChatGPT on our datasets and investigate its potential for fuzzy execution predictions. We build baselines for CodeTransOcean and analyze challenges of code translation for guiding future research. The CodeTransOcean datasets and code are publicly available at https://github.com/WeixiangYAN/CodeTransOcean.",
}
|
Recent code translation techniques exploit neural machine translation models to translate source code from one programming language to another to satisfy production compatibility or to improve the efficiency of codebase maintenance. Most existing code translation datasets only focus on a single pair of popular programming languages. To advance research on code translation and meet diverse requirements of real-world applications, we construct **CodeTransOcean**, a large-scale comprehensive benchmark that supports the largest variety of programming languages for code translation. CodeTransOcean consists of three novel multilingual datasets, namely, **MultilingualTrans** supporting translations between multiple popular programming languages, **NicheTrans** for translating between niche programming languages and popular ones, and **LLMTrans** for evaluating executability of translated code by large language models (LLMs). CodeTransOcean also includes a novel cross-framework dataset, **DLTrans**, for translating deep learning code across different frameworks. We develop multilingual modeling approaches for code translation and demonstrate their great potential in improving the translation quality of both low-resource and high-resource language pairs and boosting training efficiency. We also propose a novel evaluation metric **Debugging Success Rate@K** for program-level code translation. Last but not least, we evaluate the LLM ChatGPT on our datasets and investigate its potential for fuzzy execution predictions. We build baselines for CodeTransOcean and analyze challenges of code translation for guiding future research. The CodeTransOcean datasets and code are publicly available at https://github.com/WeixiangYAN/CodeTransOcean.
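One plausible reading of **Debugging Success Rate@K**: a translated program counts as a success if it executes correctly within K debugging rounds. The sketch below assumes a per-program record of the round at which execution first succeeded (None if it never did); the data layout is an assumption, not the paper's exact format.

```python
def debugging_success_rate_at_k(rounds_to_success, k):
    # Fraction of programs that ran correctly within k debugging rounds.
    hits = sum(1 for r in rounds_to_success if r is not None and r <= k)
    return hits / len(rounds_to_success)

# Four programs: two succeed immediately or after 1 round, one after 2, one never.
print(debugging_success_rate_at_k([0, 2, None, 1], k=1))  # 0.5
```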
|
[
"Yan, Weixiang",
"Tian, Yuchen",
"Li, Yunzhe",
"Chen, Qian",
"Wang, Wen"
] |
CodeTransOcean: A Comprehensive Multilingual Benchmark for Code Translation
|
findings-emnlp.337
|
2310.04951
|
[
"https://github.com/weixiangyan/codetransocean"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.338.bib
|
https://aclanthology.org/2023.findings-emnlp.338/
|
@inproceedings{bolucu-etal-2023-impact,
title = "impact of sample selection on in-context learning for entity extraction from scientific writing",
author = {B{\"o}l{\"u}c{\"u}, Necva and
Rybinski, Maciej and
Wan, Stephen},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.338",
doi = "10.18653/v1/2023.findings-emnlp.338",
pages = "5090--5107",
abstract = "Prompt-based usage of Large Language Models (LLMs) is an increasingly popular way to tackle many well-known natural language problems. This trend is due, in part, to the appeal of the In-Context Learning (ICL) prompt set-up, in which a few selected training examples are provided along with the inference request. ICL, a type of few-shot learning, is especially attractive for natural language processing (NLP) tasks defined for specialised domains, such as entity extraction from scientific documents, where the annotation is very costly due to expertise requirements for the annotators. In this paper, we present a comprehensive analysis of in-context sample selection methods for entity extraction from scientific documents using GPT-3.5 and compare these results against a fully supervised transformer-based baseline. Our results indicate that the effectiveness of the in-context sample selection methods is heavily domain-dependent, but the improvements are more notable for problems with a larger number of entity types. More in-depth analysis shows that ICL is more effective for low-resource set-ups of scientific information extraction",
}
|
Prompt-based usage of Large Language Models (LLMs) is an increasingly popular way to tackle many well-known natural language problems. This trend is due, in part, to the appeal of the In-Context Learning (ICL) prompt set-up, in which a few selected training examples are provided along with the inference request. ICL, a type of few-shot learning, is especially attractive for natural language processing (NLP) tasks defined for specialised domains, such as entity extraction from scientific documents, where the annotation is very costly due to expertise requirements for the annotators. In this paper, we present a comprehensive analysis of in-context sample selection methods for entity extraction from scientific documents using GPT-3.5 and compare these results against a fully supervised transformer-based baseline. Our results indicate that the effectiveness of the in-context sample selection methods is heavily domain-dependent, but the improvements are more notable for problems with a larger number of entity types. More in-depth analysis shows that ICL is more effective for low-resource set-ups of scientific information extraction.
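A sketch of one common sample-selection strategy compared in such studies: pick the annotated sentences most similar to the test input and use them as in-context demonstrations. The token-overlap similarity measure is an illustrative stand-in for whichever selector performs best in a given domain.

```python
def jaccard(a, b):
    # Token-level Jaccard overlap between two sentences.
    a, b = set(a.lower().split()), set(b.lower().split())
    return len(a & b) / max(len(a | b), 1)

def select_demonstrations(test_sentence, train_pairs, k=4):
    # train_pairs: list of (sentence, entity_annotations) tuples.
    ranked = sorted(train_pairs,
                    key=lambda p: jaccard(test_sentence, p[0]),
                    reverse=True)
    return ranked[:k]  # k most similar annotated examples for the prompt
```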
|
[
"B{\\\"o}l{\\\"u}c{\\\"u}, Necva",
"Rybinski, Maciej",
"Wan, Stephen"
] |
impact of sample selection on in-context learning for entity extraction from scientific writing
|
findings-emnlp.338
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.339.bib
|
https://aclanthology.org/2023.findings-emnlp.339/
|
@inproceedings{pozzobon-etal-2023-goodtriever,
title = "Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models",
author = "Pozzobon, Luiza and
Ermis, Beyza and
Lewis, Patrick and
Hooker, Sara",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.339",
doi = "10.18653/v1/2023.findings-emnlp.339",
pages = "5108--5125",
abstract = "Considerable effort has been dedicated to mitigating toxicity, but existing methods often require drastic modifications to model parameters or the use of computationally intensive auxiliary models. Furthermore, previous approaches have often neglected the crucial factor of language{'}s evolving nature over time. In this work, we present a comprehensive perspective on toxicity mitigation that takes into account its changing nature. We introduce Goodtriever, a flexible methodology that matches the current state-of-the-art toxicity mitigation while achieving 43{\%} relative latency reduction during inference and being more computationally efficient. By incorporating a retrieval-based approach at decoding time, Goodtriever enables toxicity-controlled text generation. Our research advocates for an increased focus on adaptable mitigation techniques, which better reflect the data drift models face when deployed in the wild.",
}
|
Considerable effort has been dedicated to mitigating toxicity, but existing methods often require drastic modifications to model parameters or the use of computationally intensive auxiliary models. Furthermore, previous approaches have often neglected the crucial factor of language{'}s evolving nature over time. In this work, we present a comprehensive perspective on toxicity mitigation that takes into account its changing nature. We introduce Goodtriever, a flexible methodology that matches the current state-of-the-art toxicity mitigation while achieving 43{\%} relative latency reduction during inference and being more computationally efficient. By incorporating a retrieval-based approach at decoding time, Goodtriever enables toxicity-controlled text generation. Our research advocates for an increased focus on adaptable mitigation techniques, which better reflect the data drift models face when deployed in the wild.
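A simplified view of retrieval-augmented ensembling at decoding time: next-token probabilities from the base LM are reshaped by the ratio of a non-toxic expert to a toxic expert, each of which could be a kNN-LM over its own datastore. The DExperts-style combination rule and the value of alpha are illustrative assumptions, not necessarily the paper's exact formulation.

```python
import numpy as np

def combine(p_lm, p_nontoxic, p_toxic, alpha=2.0):
    # Boost tokens the non-toxic expert prefers; suppress ones the toxic
    # expert prefers, then renormalize with a numerically stable softmax.
    scores = np.log(p_lm) + alpha * (np.log(p_nontoxic) - np.log(p_toxic))
    exp = np.exp(scores - scores.max())
    return exp / exp.sum()

p = combine(np.array([0.5, 0.3, 0.2]),   # base LM
            np.array([0.6, 0.3, 0.1]),   # non-toxic expert
            np.array([0.2, 0.3, 0.5]))   # toxic expert
print(p.round(3))
```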
|
[
"Pozzobon, Luiza",
"Ermis, Beyza",
"Lewis, Patrick",
"Hooker, Sara"
] |
Goodtriever: Adaptive Toxicity Mitigation with Retrieval-augmented Models
|
findings-emnlp.339
|
2310.07589
|
[
"https://github.com/for-ai/goodtriever"
] |
https://huggingface.co/papers/2310.07589
| 2 | 0 | 0 | 4 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.340.bib
|
https://aclanthology.org/2023.findings-emnlp.340/
|
@inproceedings{huang-baldwin-2023-robustness,
title = "Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks",
author = "Huang, Yichen and
Baldwin, Timothy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.340",
doi = "10.18653/v1/2023.findings-emnlp.340",
pages = "5126--5135",
abstract = "We investigate MT evaluation metric performance on adversarially-synthesized texts, to shed light on metric robustness. We experiment with word- and character-level attacks on three popular machine translation metrics: BERTScore, BLEURT, and COMET. Our human experiments validate that automatic metrics tend to overpenalize adversarially-degraded translations. We also identify inconsistencies in BERTScore ratings, where it judges the original sentence and the adversarially-degraded one as similar, while judging the degraded translation as notably worse than the original with respect to the reference. We identify patterns of brittleness that motivate more robust metric development.",
}
|
We investigate MT evaluation metric performance on adversarially-synthesized texts, to shed light on metric robustness. We experiment with word- and character-level attacks on three popular machine translation metrics: BERTScore, BLEURT, and COMET. Our human experiments validate that automatic metrics tend to overpenalize adversarially-degraded translations. We also identify inconsistencies in BERTScore ratings, where it judges the original sentence and the adversarially-degraded one as similar, while judging the degraded translation as notably worse than the original with respect to the reference. We identify patterns of brittleness that motivate more robust metric development.
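A hedged sketch of a character-level attack on a reference-based metric: corrupt the hypothesis with adjacent-character swaps and check how much the metric's score moves. `metric` is a placeholder for a BERTScore/BLEURT/COMET-style scorer; the attack here is deliberately simple.

```python
import random

def char_swap_attack(text, n_swaps=2, seed=0):
    # Swap n_swaps random pairs of adjacent characters.
    rng = random.Random(seed)
    chars = list(text)
    for _ in range(n_swaps):
        i = rng.randrange(len(chars) - 1)
        chars[i], chars[i + 1] = chars[i + 1], chars[i]
    return "".join(chars)

def score_drop(metric, hypothesis, reference):
    # How much the metric penalizes the adversarially degraded hypothesis.
    attacked = char_swap_attack(hypothesis)
    return metric(hypothesis, reference) - metric(attacked, reference)
```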
|
[
"Huang, Yichen",
"Baldwin, Timothy"
] |
Robustness Tests for Automatic Machine Translation Metrics with Adversarial Attacks
|
findings-emnlp.340
|
2311.00508
|
[
"https://github.com/i-need-sleep/eval_attack"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.341.bib
|
https://aclanthology.org/2023.findings-emnlp.341/
|
@inproceedings{tsunomori-etal-2023-time,
title = "Time-Considerable Dialogue Models via Reranking by Time Dependency",
author = "Tsunomori, Yuiko and
Ishihata, Masakazu and
Sugiyama, Hiroaki",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.341",
doi = "10.18653/v1/2023.findings-emnlp.341",
pages = "5136--5149",
abstract = "In the last few years, generative dialogue models have shown excellent performance and have been used for various applications. As chatbots become more prevalent in our daily lives, more and more people expect them to behave more like humans, but existing dialogue models do not consider the time information that people are constantly aware of. In this paper, we aim to construct a time-considerable dialogue model that actively utilizes time information. First, we categorize responses by their naturalness at different times and introduce a new metric to classify responses into our categories. Then, we propose a new reranking method to make the existing dialogue model time-considerable using the proposed metric and subjectively evaluate the performances of the obtained time-considerable dialogue models by humans.",
}
|
In the last few years, generative dialogue models have shown excellent performance and have been used for various applications. As chatbots become more prevalent in our daily lives, more and more people expect them to behave more like humans, but existing dialogue models do not consider the time information that people are constantly aware of. In this paper, we aim to construct a time-considerable dialogue model that actively utilizes time information. First, we categorize responses by their naturalness at different times and introduce a new metric to classify responses into our categories. Then, we propose a new reranking method that uses the proposed metric to make an existing dialogue model time-considerable, and we subjectively evaluate the performance of the resulting time-considerable dialogue models with human judges.
|
[
"Tsunomori, Yuiko",
"Ishihata, Masakazu",
"Sugiyama, Hiroaki"
] |
Time-Considerable Dialogue Models via Reranking by Time Dependency
|
findings-emnlp.341
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.342.bib
|
https://aclanthology.org/2023.findings-emnlp.342/
|
@inproceedings{dankers-lucas-2023-non,
title = "Non-Compositionality in Sentiment: New Data and Analyses",
author = "Dankers, Verna and
Lucas, Christopher",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.342",
doi = "10.18653/v1/2023.findings-emnlp.342",
pages = "5150--5162",
abstract = "When natural language phrases are combined, their meaning is often more than the sum of their parts. In the context of NLP tasks such as sentiment analysis, where the meaning of a phrase is its sentiment, that still applies. Many NLP studies on sentiment analysis, however, focus on the fact that sentiment computations are largely compositional. We, instead, set out to obtain non-compositionality ratings for phrases with respect to their sentiment. Our contributions are as follows: a) a methodology for obtaining those non-compositionality ratings, b) a resource of ratings for 259 phrases {--} NonCompSST {--} along with an analysis of that resource, and c) an evaluation of computational models for sentiment analysis using this new resource.",
}
|
When natural language phrases are combined, their meaning is often more than the sum of their parts. In the context of NLP tasks such as sentiment analysis, where the meaning of a phrase is its sentiment, that still applies. Many NLP studies on sentiment analysis, however, focus on the fact that sentiment computations are largely compositional. We, instead, set out to obtain non-compositionality ratings for phrases with respect to their sentiment. Our contributions are as follows: a) a methodology for obtaining those non-compositionality ratings, b) a resource of ratings for 259 phrases {--} NonCompSST {--} along with an analysis of that resource, and c) an evaluation of computational models for sentiment analysis using this new resource.
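One way to operationalise sentiment non-compositionality, sketched below: compare a phrase's observed sentiment rating with a naive composition of its parts' ratings. The averaging composition function and the rating scale are illustrative assumptions, not the paper's annotation protocol.

```python
def noncompositionality(phrase_rating, part_ratings):
    # Gap between the rated phrase sentiment and a naive average of parts.
    composed = sum(part_ratings) / len(part_ratings)
    return abs(phrase_rating - composed)

# "guilty pleasure": a mildly positive phrase built from a negative and a
# positive word, so the gap from naive composition is large.
print(noncompositionality(0.6, [-0.7, 0.8]))  # 0.55
```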
|
[
"Dankers, Verna",
"Lucas, Christopher"
] |
Non-Compositionality in Sentiment: New Data and Analyses
|
findings-emnlp.342
|
2310.20656
|
[
"https://github.com/vernadankers/noncompsst"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.343.bib
|
https://aclanthology.org/2023.findings-emnlp.343/
|
@inproceedings{chen-etal-2023-mprompt,
title = "{MP}rompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension",
author = "Chen, Guoxin and
Qian, Yiming and
Wang, Bowen and
Li, Liangzhi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.343",
doi = "10.18653/v1/2023.findings-emnlp.343",
pages = "5163--5175",
abstract = "The large language models have achieved superior performance on various natural language tasks. One major drawback of such approaches is they are resource-intensive in fine-tuning new datasets. Soft-prompt tuning presents a resource-efficient solution to fine-tune the pre-trained language models (PLMs) while keeping their weight frozen. Existing soft prompt methods mainly focus on designing the input-independent prompts that steer the model to fit the domain of the new dataset. Those methods often ignore the fine-grained information about the task and context of the text. In this paper, we propose a multi-level prompt tuning (MPrompt) method for machine reading comprehension. It utilizes prompts at task-specific, domain-specific, and context-specific levels to enhance the comprehension of input semantics at different granularities. We also propose an independence constraint to steer each domain-specific prompt to focus on information within its domain to avoid redundancy. Moreover, we present a prompt generator that incorporates context-related knowledge in the prompt generation to enhance contextual relevancy. We conducted extensive experiments on 12 benchmarks of various QA formats and achieved an average improvement of 1.94{\%} over the state-of-the-art methods.",
}
|
Large language models have achieved superior performance on various natural language tasks. One major drawback of such approaches is that they are resource-intensive when fine-tuned on new datasets. Soft-prompt tuning presents a resource-efficient solution for fine-tuning pre-trained language models (PLMs) while keeping their weights frozen. Existing soft prompt methods mainly focus on designing input-independent prompts that steer the model to fit the domain of the new dataset. Those methods often ignore fine-grained information about the task and the context of the text. In this paper, we propose a multi-level prompt tuning (MPrompt) method for machine reading comprehension. It utilizes prompts at task-specific, domain-specific, and context-specific levels to enhance the comprehension of input semantics at different granularities. We also propose an independence constraint to steer each domain-specific prompt to focus on information within its domain to avoid redundancy. Moreover, we present a prompt generator that incorporates context-related knowledge in the prompt generation to enhance contextual relevancy. We conducted extensive experiments on 12 benchmarks of various QA formats and achieved an average improvement of 1.94{\%} over the state-of-the-art methods.
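A minimal sketch of multi-level soft prompts: learnable task- and domain-level prompt vectors plus an input-conditioned context-level prompt are concatenated in front of a (frozen) PLM's input embeddings. The prompt lengths and the mean-pooled context generator are simplified assumptions, not the paper's exact architecture.

```python
import torch
import torch.nn as nn

class MultiLevelPrompt(nn.Module):
    def __init__(self, d_model=768, n_task=8, n_domain=8, n_ctx=8):
        super().__init__()
        self.task = nn.Parameter(torch.randn(n_task, d_model) * 0.02)
        self.domain = nn.Parameter(torch.randn(n_domain, d_model) * 0.02)
        self.ctx_gen = nn.Linear(d_model, n_ctx * d_model)  # context prompts
        self.n_ctx, self.d = n_ctx, d_model

    def forward(self, input_embeds):  # input_embeds: (batch, seq, d_model)
        b = input_embeds.size(0)
        # Context-specific prompts conditioned on the pooled input.
        ctx = self.ctx_gen(input_embeds.mean(dim=1)).view(b, self.n_ctx, self.d)
        task = self.task.unsqueeze(0).expand(b, -1, -1)
        domain = self.domain.unsqueeze(0).expand(b, -1, -1)
        return torch.cat([task, domain, ctx, input_embeds], dim=1)
```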
|
[
"Chen, Guoxin",
"Qian, Yiming",
"Wang, Bowen",
"Li, Liangzhi"
] |
MPrompt: Exploring Multi-level Prompt Tuning for Machine Reading Comprehension
|
findings-emnlp.343
|
2310.18167
|
[
"https://github.com/chen-gx/mprompt"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.344.bib
|
https://aclanthology.org/2023.findings-emnlp.344/
|
@inproceedings{wang-etal-2023-doctrack,
title = "{D}oc{T}rack: A Visually-Rich Document Dataset Really Aligned with Human Eye Movement for Machine Reading",
author = "Wang, Hao and
Wang, Qingxuan and
Li, Yue and
Wang, Changqing and
Chu, Chenhui and
Wang, Rui",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.344",
doi = "10.18653/v1/2023.findings-emnlp.344",
pages = "5176--5189",
abstract = "The use of visually-rich documents in various fields has created a demand for Document AI models that can read and comprehend documents like humans, which requires the overcoming of technical, linguistic, and cognitive barriers. Unfortunately, the lack of appropriate datasets has significantly hindered advancements in the field. To address this issue, we introduce DocTrack, a visually-rich document dataset really aligned with human eye-movement information using eye-tracking technology. This dataset can be used to investigate the challenges mentioned above. Additionally, we explore the impact of human reading order on document understanding tasks and examine what would happen if a machine reads in the same order as a human. Our results suggest that although Document AI models have made significant progresses, they still have a long way to go before they can read visually richer documents as accurately, continuously, and flexibly as humans do. These findings have potential implications for future research and development of document intelligence.",
}
|
The use of visually-rich documents in various fields has created a demand for Document AI models that can read and comprehend documents like humans, which requires overcoming technical, linguistic, and cognitive barriers. Unfortunately, the lack of appropriate datasets has significantly hindered advancements in the field. To address this issue, we introduce DocTrack, a visually-rich document dataset really aligned with human eye-movement information collected using eye-tracking technology. This dataset can be used to investigate the challenges mentioned above. Additionally, we explore the impact of human reading order on document understanding tasks and examine what would happen if a machine read in the same order as a human. Our results suggest that although Document AI models have made significant progress, they still have a long way to go before they can read visually-rich documents as accurately, continuously, and flexibly as humans do. These findings have potential implications for future research and development of document intelligence.
|
[
"Wang, Hao",
"Wang, Qingxuan",
"Li, Yue",
"Wang, Changqing",
"Chu, Chenhui",
"Wang, Rui"
] |
DocTrack: A Visually-Rich Document Dataset Really Aligned with Human Eye Movement for Machine Reading
|
findings-emnlp.344
|
2310.14802
|
[
"https://github.com/hint-lab/doctrack"
] |
https://huggingface.co/papers/2310.14802
| 0 | 0 | 0 | 6 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.345.bib
|
https://aclanthology.org/2023.findings-emnlp.345/
|
@inproceedings{chen-etal-2023-adaptation,
title = "Adaptation with Self-Evaluation to Improve Selective Prediction in {LLM}s",
author = "Chen, Jiefeng and
Yoon, Jinsung and
Ebrahimi, Sayna and
Arik, Sercan and
Pfister, Tomas and
Jha, Somesh",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.345",
doi = "10.18653/v1/2023.findings-emnlp.345",
pages = "5190--5213",
abstract = "Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors. *Selective prediction* is a technique that can be used to improve the reliability of the LLMs by allowing them to abstain from making predictions when they are unsure of the answer. In this work, we propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of LLMs. Our framework is based on the idea of using parameter-efficient tuning to adapt the LLM to the specific task at hand while improving its ability to perform self-evaluation. We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods. For example, on the CoQA benchmark, our method improves the AUACC from 91.23{\%} to 92.63{\%} and improves the AUROC from 74.61{\%} to 80.25{\%}.",
}
|
Large language models (LLMs) have recently shown great advances in a variety of tasks, including natural language understanding and generation. However, their use in high-stakes decision-making scenarios is still limited due to the potential for errors. *Selective prediction* is a technique that can be used to improve the reliability of the LLMs by allowing them to abstain from making predictions when they are unsure of the answer. In this work, we propose a novel framework for adaptation with self-evaluation to improve the selective prediction performance of LLMs. Our framework is based on the idea of using parameter-efficient tuning to adapt the LLM to the specific task at hand while improving its ability to perform self-evaluation. We evaluate our method on a variety of question-answering (QA) datasets and show that it outperforms state-of-the-art selective prediction methods. For example, on the CoQA benchmark, our method improves the AUACC from 91.23{\%} to 92.63{\%} and improves the AUROC from 74.61{\%} to 80.25{\%}.
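Selective prediction in miniature, as described in the abstract: answer only when the model's confidence clears a threshold, then report coverage and selective accuracy. The confidence scores below are placeholders for the paper's learned self-evaluation; the thresholding itself is the standard selective-prediction recipe.

```python
def selective_report(confidences, is_correct, threshold=0.8):
    # Keep only predictions whose self-evaluated confidence is high enough.
    kept = [ok for conf, ok in zip(confidences, is_correct) if conf >= threshold]
    coverage = len(kept) / len(is_correct)
    accuracy = sum(kept) / len(kept) if kept else float("nan")
    return coverage, accuracy

print(selective_report([0.9, 0.95, 0.6, 0.85], [True, True, False, False]))
# Keeps 3 of 4 predictions: coverage 0.75, selective accuracy ~0.667
```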
|
[
"Chen, Jiefeng",
"Yoon, Jinsung",
"Ebrahimi, Sayna",
"Arik, Sercan",
"Pfister, Tomas",
"Jha, Somesh"
] |
Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
|
findings-emnlp.345
|
2310.11689
|
[
""
] |
https://huggingface.co/papers/2310.11689
| 1 | 1 | 0 | 6 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.346.bib
|
https://aclanthology.org/2023.findings-emnlp.346/
|
@inproceedings{tong-etal-2023-bi,
title = "Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization",
author = "Tong, Shoujie and
Xia, Heming and
Dai, Damai and
Xu, Runxin and
Liu, Tianyu and
Lin, Binghuai and
Cao, Yunbo and
Sui, Zhifang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.346",
doi = "10.18653/v1/2023.findings-emnlp.346",
pages = "5214--5227",
abstract = "Pretrained language models have achieved remarkable success in natural language understanding. However, fine-tuning pretrained models on limited training data tends to overfit and thus diminish performance. This paper presents Bi-Drop, a fine-tuning strategy that selectively updates model parameters using gradients from various sub-nets dynamically generated by dropout. The sub-net estimation of Bi-Drop is performed in an in-batch manner, so it overcomes the problem of hysteresis in sub-net updating, which is possessed by previous methods that perform asynchronous sub-net estimation. Also, Bi-Drop needs only one mini-batch to estimate the sub-net so it achieves higher utility of training data. Experiments on the GLUE benchmark demonstrate that Bi-Drop consistently outperforms previous fine-tuning methods. Furthermore, empirical results also show that Bi-Drop exhibits excellent generalization ability and robustness for domain transfer, data imbalance, and low-resource scenarios.",
}
|
Pretrained language models have achieved remarkable success in natural language understanding. However, fine-tuning pretrained models on limited training data tends to overfit, diminishing performance. This paper presents Bi-Drop, a fine-tuning strategy that selectively updates model parameters using gradients from various sub-nets dynamically generated by dropout. The sub-net estimation of Bi-Drop is performed in an in-batch manner, so it overcomes the hysteresis in sub-net updating that afflicts previous methods, which perform asynchronous sub-net estimation. Also, Bi-Drop needs only one mini-batch to estimate the sub-net, so it makes better use of the training data. Experiments on the GLUE benchmark demonstrate that Bi-Drop consistently outperforms previous fine-tuning methods. Furthermore, empirical results also show that Bi-Drop exhibits excellent generalization ability and robustness in domain transfer, data imbalance, and low-resource scenarios.
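One plausible reading of in-batch sub-net estimation, sketched below: run the same mini-batch through two dropout sub-nets and update only the parameters whose gradients agree across the passes. The sign-agreement criterion and averaging are illustrative assumptions, not the paper's exact selection rule.

```python
import torch

def bi_drop_step(model, loss_fn, batch, optimizer):
    model.train()                            # dropout must be active
    grads = []
    for _ in range(2):                       # two dropout sub-nets, same mini-batch
        optimizer.zero_grad()
        loss_fn(model, batch).backward()
        grads.append([p.grad.clone() for p in model.parameters()])
    optimizer.zero_grad()
    for p, g1, g2 in zip(model.parameters(), grads[0], grads[1]):
        agree = (torch.sign(g1) == torch.sign(g2)).float()
        p.grad = 0.5 * (g1 + g2) * agree     # update only the agreed sub-net
    optimizer.step()
```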
|
[
"Tong, Shoujie",
"Xia, Heming",
"Dai, Damai",
"Xu, Runxin",
"Liu, Tianyu",
"Lin, Binghuai",
"Cao, Yunbo",
"Sui, Zhifang"
] |
Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
|
findings-emnlp.346
|
2305.14760
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.347.bib
|
https://aclanthology.org/2023.findings-emnlp.347/
|
@inproceedings{zhang-etal-2023-clozex,
title = "{C}loz{E}x: A Task toward Generation of {E}nglish Cloze Explanation",
author = "Zhang, Zizheng and
Mita, Masato and
Komachi, Mamoru",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.347",
doi = "10.18653/v1/2023.findings-emnlp.347",
pages = "5228--5242",
abstract = "Providing explanations for cloze questions in language assessment (LA) has been recognized as a valuable approach to enhancing the language proficiency of learners. However, there is a noticeable absence of dedicated tasks and datasets specifically designed for generating language learner explanations. In response to this gap, this paper introduces a novel task ClozEx of generating explanations for cloze questions in LA, with a particular focus on English as a Second Language (ESL) learners. To support this task, we present a meticulously curated dataset comprising cloze questions paired with corresponding explanations. This dataset aims to assess language proficiency and facilitates language learning by offering informative and accurate explanations. To tackle the task, we fine-tuned various baseline models with our training data, including encoder-decoder and decoder-only architectures. We also explored whether large language models (LLMs) are able to generate good explanations without fine-tuning, just using pre-defined prompts. The evaluation results demonstrate that encoder-decoder models have the potential to deliver fluent and valid explanations when trained on our dataset.",
}
|
Providing explanations for cloze questions in language assessment (LA) has been recognized as a valuable approach to enhancing the language proficiency of learners. However, there is a noticeable absence of dedicated tasks and datasets specifically designed for generating language learner explanations. In response to this gap, this paper introduces a novel task ClozEx of generating explanations for cloze questions in LA, with a particular focus on English as a Second Language (ESL) learners. To support this task, we present a meticulously curated dataset comprising cloze questions paired with corresponding explanations. This dataset aims to assess language proficiency and facilitates language learning by offering informative and accurate explanations. To tackle the task, we fine-tuned various baseline models with our training data, including encoder-decoder and decoder-only architectures. We also explored whether large language models (LLMs) are able to generate good explanations without fine-tuning, just using pre-defined prompts. The evaluation results demonstrate that encoder-decoder models have the potential to deliver fluent and valid explanations when trained on our dataset.
|
[
"Zhang, Zizheng",
"Mita, Masato",
"Komachi, Mamoru"
] |
ClozEx: A Task toward Generation of English Cloze Explanation
|
findings-emnlp.347
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.348.bib
|
https://aclanthology.org/2023.findings-emnlp.348/
|
@inproceedings{levy-etal-2023-probing,
title = "Is Probing All You Need? Indicator Tasks as an Alternative to Probing Embedding Spaces",
author = "Levy, Tal and
Goldman, Omer and
Tsarfaty, Reut",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.348",
doi = "10.18653/v1/2023.findings-emnlp.348",
pages = "5243--5254",
abstract = "The ability to identify and control different kinds of linguistic information encoded in vector representations of words has many use cases, especially for explainability and bias removal. This is usually done via a set of simple classification tasks, termed \textit{probes}, to evaluate the information encoded in the embedding space. However, the involvement of a trainable classifier leads to entanglement between the probe{'}s results and the classifier{'}s nature. As a result, contemporary works on probing include tasks that do not involve training of auxiliary models. In this work we introduce the term \textit{indicator tasks} for non-trainable tasks which are used to query embedding spaces for the existence of certain properties, and claim that this kind of tasks may point to a direction opposite to probes, and that this contradiction complicates the decision on whether a property exists in an embedding space. We demonstrate our claims with two test cases, one dealing with gender debiasing and another with the erasure of morphological information from embedding spaces. We show that the application of a suitable indicator provides a more accurate picture of the information captured and removed compared to probes. We thus conclude that indicator tasks should be implemented and taken into consideration when eliciting information from embedded representations.",
}
|
The ability to identify and control different kinds of linguistic information encoded in vector representations of words has many use cases, especially for explainability and bias removal. This is usually done via a set of simple classification tasks, termed \textit{probes}, to evaluate the information encoded in the embedding space. However, the involvement of a trainable classifier leads to entanglement between the probe{'}s results and the classifier{'}s nature. As a result, contemporary works on probing include tasks that do not involve training of auxiliary models. In this work we introduce the term \textit{indicator tasks} for non-trainable tasks that are used to query embedding spaces for the existence of certain properties, and claim that tasks of this kind may point in a direction opposite to probes, and that this contradiction complicates the decision of whether a property exists in an embedding space. We demonstrate our claims with two test cases, one dealing with gender debiasing and another with the erasure of morphological information from embedding spaces. We show that the application of a suitable indicator provides a more accurate picture of the information captured and removed compared to probes. We thus conclude that indicator tasks should be implemented and taken into consideration when eliciting information from embedded representations.
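An example of what a non-trainable indicator task can look like, in the spirit of the gender-debiasing test case: measure how strongly word vectors project onto a gender direction built from seed pairs. No classifier is trained; the indicator is a fixed geometric query. The seed pairs and projection score are illustrative assumptions.

```python
import numpy as np

def gender_indicator(emb, word, pairs=(("he", "she"), ("man", "woman"))):
    # Build a gender direction from difference vectors of seed pairs.
    direction = np.mean([emb[a] - emb[b] for a, b in pairs], axis=0)
    direction /= np.linalg.norm(direction)
    v = emb[word] / np.linalg.norm(emb[word])
    # Cosine projection onto the direction; near 0 suggests the property
    # has been erased from this word's representation.
    return float(v @ direction)

# emb: dict mapping words to vectors, e.g. from a debiased embedding space.
```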
|
[
"Levy, Tal",
"Goldman, Omer",
"Tsarfaty, Reut"
] |
Is Probing All You Need? Indicator Tasks as an Alternative to Probing Embedding Spaces
|
findings-emnlp.348
|
2310.15905
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.349.bib
|
https://aclanthology.org/2023.findings-emnlp.349/
|
@inproceedings{namburi-etal-2023-cost,
title = "The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models",
author = "Namburi, Satya Sai Srinath and
Sreedhar, Makesh and
Srinivasan, Srinath and
Sala, Frederic",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.349",
doi = "10.18653/v1/2023.findings-emnlp.349",
pages = "5255--5273",
abstract = "Compressing large language models (LLMs), often consisting of billions of parameters, provides faster inference, smaller memory footprints, and enables local deployment. The standard compression techniques are pruning and quantization, with the former eliminating redundant connections in model layers and the latter representing model parameters with as little as 4 bits. The key tradeoff is between the degree of compression and the impact on the quality of the compressed model. Existing research on LLM compression primarily focuses on performance in terms of general metrics like perplexity or downstream task accuracy. More fine-grained metrics, such as those measuring parametric knowledge, remain significantly underexplored. To help bridge this gap, we present a comprehensive analysis across multiple model families using the LAMA and LM-Harness benchmarks in order to systematically quantify the effect of commonly employed compression techniques on model performance. A particular focus is on tradeoffs involving parametric knowledge, with the goal of providing practitioners with practical insights to make informed decisions on compression.",
}
|
Compressing large language models (LLMs), often consisting of billions of parameters, provides faster inference, smaller memory footprints, and enables local deployment. The standard compression techniques are pruning and quantization, with the former eliminating redundant connections in model layers and the latter representing model parameters with as little as 4 bits. The key tradeoff is between the degree of compression and the impact on the quality of the compressed model. Existing research on LLM compression primarily focuses on performance in terms of general metrics like perplexity or downstream task accuracy. More fine-grained metrics, such as those measuring parametric knowledge, remain significantly underexplored. To help bridge this gap, we present a comprehensive analysis across multiple model families using the LAMA and LM-Harness benchmarks in order to systematically quantify the effect of commonly employed compression techniques on model performance. A particular focus is on tradeoffs involving parametric knowledge, with the goal of providing practitioners with practical insights to make informed decisions on compression.
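Minimal versions of the two techniques under study: magnitude pruning (zero out the smallest weights) and uniform quantization (round weights to a small number of levels). Real LLM compression is layer-wise and calibrated; this only illustrates the core operations.

```python
import numpy as np

def prune(w, sparsity=0.5):
    # Zero out the smallest-magnitude fraction of weights.
    thresh = np.quantile(np.abs(w), sparsity)
    return np.where(np.abs(w) < thresh, 0.0, w)

def quantize(w, bits=4):
    # Uniformly quantize weights to 2**bits levels over their range.
    levels = 2 ** bits - 1
    lo, hi = w.min(), w.max()
    return np.round((w - lo) / (hi - lo) * levels) / levels * (hi - lo) + lo

w = np.random.randn(4, 4)
print(prune(w), quantize(w, bits=4), sep="\n")
```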
|
[
"Namburi, Satya Sai Srinath",
"Sreedhar, Makesh",
"Srinivasan, Srinath",
"Sala, Frederic"
] |
The Cost of Compression: Investigating the Impact of Compression on Parametric Knowledge in Language Models
|
findings-emnlp.349
|
2312.00960
|
[
"https://github.com/namburisrinath/llmcompression"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.350.bib
|
https://aclanthology.org/2023.findings-emnlp.350/
|
@inproceedings{raheja-etal-2023-coedit,
title = "{C}o{E}d{IT}: Text Editing by Task-Specific Instruction Tuning",
author = "Raheja, Vipul and
Kumar, Dhruv and
Koo, Ryan and
Kang, Dongyeop",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.350",
doi = "10.18653/v1/2023.findings-emnlp.350",
pages = "5274--5291",
abstract = "We introduce CoEdIT, a state-of-the-art text editing system for writing assistance. CoEdIT takes instructions from the user specifying the attributes of the desired text, such as {``}Make the sentence simpler{''} or {``}Write it in a more neutral style,{''} and outputs the edited text. We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing (a total of 82K instructions). Our model (1) achieves state-of-the-art performance on various text editing benchmarks, (2) is competitive with publicly available largest-sized LLMs trained on instructions while being {\textasciitilde}60x smaller, (3) is capable of generalizing to unseen edit instructions, and (4) exhibits abilities to generalize to composite instructions containing different combinations of edit actions. Through extensive qualitative and quantitative analysis, we show that writers prefer the edits suggested by CoEdIT relative to other state-of-the-art text editing models. Our code, data, and models are publicly available at https://github.com/vipulraheja/coedit.",
}
|
We introduce CoEdIT, a state-of-the-art text editing system for writing assistance. CoEdIT takes instructions from the user specifying the attributes of the desired text, such as {``}Make the sentence simpler{''} or {``}Write it in a more neutral style,{''} and outputs the edited text. We present a large language model fine-tuned on a diverse collection of task-specific instructions for text editing (a total of 82K instructions). Our model (1) achieves state-of-the-art performance on various text editing benchmarks, (2) is competitive with publicly available largest-sized LLMs trained on instructions while being {\textasciitilde}60x smaller, (3) is capable of generalizing to unseen edit instructions, and (4) exhibits abilities to generalize to composite instructions containing different combinations of edit actions. Through extensive qualitative and quantitative analysis, we show that writers prefer the edits suggested by CoEdIT relative to other state-of-the-art text editing models. Our code, data, and models are publicly available at https://github.com/vipulraheja/coedit.
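A usage sketch for the CoEdIT checkpoints listed in this record (T5-based, following the Hugging Face model card for grammarly/coedit-large); the instruction phrasing mirrors the paper's examples. Requires `transformers` and a model download.

```python
from transformers import AutoTokenizer, T5ForConditionalGeneration

tokenizer = AutoTokenizer.from_pretrained("grammarly/coedit-large")
model = T5ForConditionalGeneration.from_pretrained("grammarly/coedit-large")

# Task-specific instruction prepended to the text to edit.
prompt = "Make this sentence simpler: The ramifications of the decision remain indeterminate."
input_ids = tokenizer(prompt, return_tensors="pt").input_ids
outputs = model.generate(input_ids, max_length=64)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```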
|
[
"Raheja, Vipul",
"Kumar, Dhruv",
"Koo, Ryan",
"Kang, Dongyeop"
] |
CoEdIT: Text Editing by Task-Specific Instruction Tuning
|
findings-emnlp.350
|
2305.09857
|
[
"https://github.com/vipulraheja/coedit"
] |
https://huggingface.co/papers/2305.09857
| 3 | 6 | 3 | 4 |
[
"grammarly/coedit-large",
"grammarly/coedit-xxl",
"grammarly/coedit-xl-composite",
"grammarly/coedit-xl",
"HARSHU550/Grammer"
] |
[
"grammarly/coedit",
"BEE-spoke-data/coedit-reworded-deduped",
"chargoddard/coedit-reworded",
"nayohan/coedit-ko"
] |
[
"jbochi/Candle-CoEdIT-Wasm",
"NoaiGPT/free",
"ColeGuion/grammarly-coedit-large",
"th0mascat/test",
"anmolmore/mlapp",
"johanjs1/grammarly-coedit-large",
"ColeGuion/Grammarly_Space",
"vkthakur88/grammarly-coedit-large",
"NoaiGPT/grammarly-coedit-large",
"zahir-waylon/grammarly-coedit-xl",
"tarjomeh/grammarly-coedit-xl-composite",
"Sham786/CoEdit",
"Sham786/grammarly-coedit-xxl",
"tarjomeh/grammarly-coedit-xxl"
] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.351.bib
|
https://aclanthology.org/2023.findings-emnlp.351/
|
@inproceedings{dai-etal-2023-exploring,
title = "Exploring Large Language Models for Multi-Modal Out-of-Distribution Detection",
author = "Dai, Yi and
Lang, Hao and
Zeng, Kaisheng and
Huang, Fei and
Li, Yongbin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.351",
doi = "10.18653/v1/2023.findings-emnlp.351",
pages = "5292--5305",
abstract = "Out-of-distribution (OOD) detection is essential for reliable and trustworthy machine learning. Recent multi-modal OOD detection leverages textual information from in-distribution (ID) class names for visual OOD detection, yet it currently neglects the rich contextual information of ID classes. Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class. Indiscriminately using such knowledge causes catastrophic damage to OOD detection due to LLMs{'} hallucinations, as is observed by our analysis. In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs. Specifically, we introduce a consistency-based uncertainty calibration method to estimate the confidence score of each generation. We further extract visual objects from each image to fully capitalize on the aforementioned world knowledge. Extensive experiments demonstrate that our method consistently outperforms the state-of-the-art.",
}
|
Out-of-distribution (OOD) detection is essential for reliable and trustworthy machine learning. Recent multi-modal OOD detection leverages textual information from in-distribution (ID) class names for visual OOD detection, yet it currently neglects the rich contextual information of ID classes. Large language models (LLMs) encode a wealth of world knowledge and can be prompted to generate descriptive features for each class. Indiscriminately using such knowledge causes catastrophic damage to OOD detection due to LLMs{'} hallucinations, as observed in our analysis. In this paper, we propose to apply world knowledge to enhance OOD detection performance through selective generation from LLMs. Specifically, we introduce a consistency-based uncertainty calibration method to estimate the confidence score of each generation. We further extract visual objects from each image to fully capitalize on the aforementioned world knowledge. Extensive experiments demonstrate that our method consistently outperforms the state-of-the-art.
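A sketch of consistency-based uncertainty calibration: sample several generations for the same prompt and use the frequency of the majority answer as the confidence score, so low-confidence (likely hallucinated) descriptions can be filtered before OOD scoring. `generate` is a placeholder LLM call and the threshold is an assumption.

```python
from collections import Counter

def consistency_confidence(generate, prompt, n_samples=5):
    # Sample repeatedly; confidence = share of samples agreeing with the mode.
    samples = [generate(prompt) for _ in range(n_samples)]
    answer, count = Counter(samples).most_common(1)[0]
    return answer, count / n_samples

# e.g. keep a generated class description only if confidence >= 0.6
```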
|
[
"Dai, Yi",
"Lang, Hao",
"Zeng, Kaisheng",
"Huang, Fei",
"Li, Yongbin"
] |
Exploring Large Language Models for Multi-Modal Out-of-Distribution Detection
|
findings-emnlp.351
|
2310.08027
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.352.bib
|
https://aclanthology.org/2023.findings-emnlp.352/
|
@inproceedings{chepurova-etal-2023-better,
title = "Better Together: Enhancing Generative Knowledge Graph Completion with Language Models and Neighborhood Information",
author = "Chepurova, Alla and
Bulatov, Aydar and
Kuratov, Yuri and
Burtsev, Mikhail",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.352",
doi = "10.18653/v1/2023.findings-emnlp.352",
pages = "5306--5316",
abstract = "Real-world Knowledge Graphs (KGs) often suffer from incompleteness, which limits their potential performance. Knowledge Graph Completion (KGC) techniques aim to address this issue. However, traditional KGC methods are computationally intensive and impractical for large-scale KGs, necessitating the learning of dense node embeddings and computing pairwise distances. Generative transformer-based language models (e.g., T5 and recent KGT5) offer a promising solution as they can predict the tail nodes directly. In this study, we propose to include node neighborhoods as additional information to improve KGC methods based on language models. We examine the effects of this imputation and show that, on both inductive and transductive Wikidata subsets, our method outperforms KGT5 and conventional KGC approaches. We also provide an extensive analysis of the impact of neighborhood on model prediction and show its importance. Furthermore, we point the way to significantly improve KGC through more effective neighborhood selection.",
}
|
Real-world Knowledge Graphs (KGs) often suffer from incompleteness, which limits their potential performance. Knowledge Graph Completion (KGC) techniques aim to address this issue. However, traditional KGC methods are computationally intensive and impractical for large-scale KGs, necessitating the learning of dense node embeddings and computing pairwise distances. Generative transformer-based language models (e.g., T5 and recent KGT5) offer a promising solution as they can predict the tail nodes directly. In this study, we propose to include node neighborhoods as additional information to improve KGC methods based on language models. We examine the effects of this imputation and show that, on both inductive and transductive Wikidata subsets, our method outperforms KGT5 and conventional KGC approaches. We also provide an extensive analysis of the impact of neighborhood on model prediction and show its importance. Furthermore, we point the way to significantly improve KGC through more effective neighborhood selection.
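A sketch of the input-serialization idea: verbalize the query (head, relation) together with the head's neighboring triples and let a seq2seq model generate the tail entity. The separator tokens, ordering, and neighbor cap are illustrative assumptions.

```python
def serialize_query(head, relation, neighbors, max_neighbors=8):
    # neighbors: list of (relation, entity) pairs around the head node.
    ctx = " | ".join(f"{r}: {e}" for r, e in neighbors[:max_neighbors])
    return f"predict tail: {head} [SEP] {relation} [SEP] neighbors: {ctx}"

print(serialize_query("Marie Curie", "award received",
                      [("field of work", "physics"),
                       ("spouse", "Pierre Curie")]))
```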
|
[
"Chepurova, Alla",
"Bulatov, Aydar",
"Kuratov, Yuri",
"Burtsev, Mikhail"
] |
Better Together: Enhancing Generative Knowledge Graph Completion with Language Models and Neighborhood Information
|
findings-emnlp.352
|
2311.01326
|
[
"https://github.com/screemix/kgc-t5-with-neighbors"
] |
https://huggingface.co/papers/2311.01326
| 3 | 2 | 0 | 4 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.353.bib
|
https://aclanthology.org/2023.findings-emnlp.353/
|
@inproceedings{xie-etal-2023-deltascore,
title = "{D}elta{S}core: Fine-Grained Story Evaluation with Perturbations",
author = "Xie, Zhuohan and
Li, Miao and
Cohn, Trevor and
Lau, Jey",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.353",
doi = "10.18653/v1/2023.findings-emnlp.353",
pages = "5317--5331",
abstract = "Numerous evaluation metrics have been developed for natural language generation tasks, but their effectiveness in evaluating stories is limited as they are not specifically tailored to assess intricate aspects of storytelling, such as fluency and interestingness. In this paper, we introduce DeltaScore, a novel methodology that uses perturbation techniques for the evaluation of nuanced story aspects. We posit that the extent to which a story excels in a specific aspect (e.g., fluency) correlates with the magnitude of its susceptibility to particular perturbations (e.g., the introduction of typos). Given this, we measure the quality of an aspect by calculating the likelihood difference between pre- and post-perturbation states using pre-trained language models. We compare DeltaScore with existing metrics on storytelling datasets from two domains in five fine-grained story aspects: fluency, coherence, relatedness, logicality, and interestingness. DeltaScore demonstrates strong performance, revealing a surprising finding that one specific perturbation proves highly effective in capturing multiple aspects. Source code is available on our GitHub repository.",
}
|
Numerous evaluation metrics have been developed for natural language generation tasks, but their effectiveness in evaluating stories is limited as they are not specifically tailored to assess intricate aspects of storytelling, such as fluency and interestingness. In this paper, we introduce DeltaScore, a novel methodology that uses perturbation techniques for the evaluation of nuanced story aspects. We posit that the extent to which a story excels in a specific aspect (e.g., fluency) correlates with the magnitude of its susceptibility to particular perturbations (e.g., the introduction of typos). Given this, we measure the quality of an aspect by calculating the likelihood difference between pre- and post-perturbation states using pre-trained language models. We compare DeltaScore with existing metrics on storytelling datasets from two domains in five fine-grained story aspects: fluency, coherence, relatedness, logicality, and interestingness. DeltaScore demonstrates strong performance, revealing a surprising finding that one specific perturbation proves highly effective in capturing multiple aspects. Source code is available on our GitHub repository.
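A sketch of the likelihood-difference idea: score a story before and after a targeted perturbation with a pretrained LM, and take the gap as the aspect score. The model choice (GPT-2), per-token averaging, and whatever perturbation function is plugged in are assumptions; the paper pairs specific perturbations (e.g., typo injection) with specific aspects.

```python
import torch
from transformers import GPT2LMHeadModel, GPT2TokenizerFast

tok = GPT2TokenizerFast.from_pretrained("gpt2")
lm = GPT2LMHeadModel.from_pretrained("gpt2").eval()

def loglik(text):
    ids = tok(text, return_tensors="pt").input_ids
    with torch.no_grad():
        # Negated mean cross-entropy = mean per-token log-likelihood.
        return -lm(ids, labels=ids).loss.item()

def delta_score(story, perturb):
    # Larger likelihood drop under perturbation => higher aspect score.
    return loglik(story) - loglik(perturb(story))
```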
|
[
"Xie, Zhuohan",
"Li, Miao",
"Cohn, Trevor",
"Lau, Jey"
] |
DeltaScore: Fine-Grained Story Evaluation with Perturbations
|
findings-emnlp.353
|
2303.08991
|
[
"https://github.com/zhuohanx/deltascore"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.354.bib
|
https://aclanthology.org/2023.findings-emnlp.354/
|
@inproceedings{lu-etal-2023-mug,
title = "{M}u{G}: A Multimodal Classification Benchmark on Game Data with Tabular, Textual, and Visual Fields",
author = "Lu, Jiaying and
Qian, Yongchen and
Zhao, Shifan and
Xi, Yuanzhe and
Yang, Carl",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.354",
doi = "10.18653/v1/2023.findings-emnlp.354",
pages = "5332--5346",
abstract = "Previous research has demonstrated the advantages of integrating data from multiple sources over traditional unimodal data, leading to the emergence of numerous novel multimodal applications. We propose a multimodal classification benchmark MuG with eight datasets that allows researchers to evaluate and improve their models. These datasets are collected from four various genres of games that cover tabular, textual, and visual modalities. We conduct multi-aspect data analysis to provide insights into the benchmark, including label balance ratios, percentages of missing features, distributions of data within each modality, and the correlations between labels and input modalities. We further present experimental results obtained by several state-of-the-art unimodal classifiers and multimodal classifiers, which demonstrate the challenging and multimodal-dependent properties of the benchmark. MuG is released at https://github.com/lujiaying/MUG-Bench with the data, tutorials, and implemented baselines.",
}
|
Previous research has demonstrated the advantages of integrating data from multiple sources over traditional unimodal data, leading to the emergence of numerous novel multimodal applications. We propose MuG, a multimodal classification benchmark with eight datasets that allows researchers to evaluate and improve their models. These datasets are collected from four different genres of games and cover tabular, textual, and visual modalities. We conduct multi-aspect data analysis to provide insights into the benchmark, including label balance ratios, percentages of missing features, distributions of data within each modality, and the correlations between labels and input modalities. We further present experimental results obtained by several state-of-the-art unimodal and multimodal classifiers, which demonstrate the challenging and multimodal-dependent properties of the benchmark. MuG is released at https://github.com/lujiaying/MUG-Bench with the data, tutorials, and implemented baselines.
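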
|
[
"Lu, Jiaying",
"Qian, Yongchen",
"Zhao, Shifan",
"Xi, Yuanzhe",
"Yang, Carl"
] |
MuG: A Multimodal Classification Benchmark on Game Data with Tabular, Textual, and Visual Fields
|
findings-emnlp.354
|
2302.02978
|
[
"https://github.com/lujiaying/mug-bench"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.355.bib
|
https://aclanthology.org/2023.findings-emnlp.355/
|
@inproceedings{wu-etal-2023-dont,
title = "Don{'}t waste a single annotation: improving single-label classifiers through soft labels",
author = "Wu, Ben and
Li, Yue and
Mu, Yida and
Scarton, Carolina and
Bontcheva, Kalina and
Song, Xingyi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.355",
doi = "10.18653/v1/2023.findings-emnlp.355",
pages = "5347--5355",
abstract = "In this paper, we address the limitations of the common data annotation and training methods for objective single-label classification tasks. Typically, when annotating such tasks annotators are only asked to provide a single label for each sample and annotator disagreement is discarded when a final hard label is decided through majority voting. We challenge this traditional approach, acknowledging that determining the appropriate label can be difficult due to the ambiguity and lack of context in the data samples. Rather than discarding the information from such ambiguous annotations, our soft label method makes use of them for training. Our findings indicate that additional annotator information, such as confidence, secondary label and disagreement, can be used to effectively generate soft labels. Training classifiers with these soft labels then leads to improved performance and calibration on the hard label test set.",
}
|
In this paper, we address the limitations of the common data annotation and training methods for objective single-label classification tasks. Typically, when annotating such tasks, annotators are only asked to provide a single label for each sample, and annotator disagreement is discarded when a final hard label is decided through majority voting. We challenge this traditional approach, acknowledging that determining the appropriate label can be difficult due to the ambiguity and lack of context in the data samples. Rather than discarding the information from such ambiguous annotations, our soft label method makes use of them for training. Our findings indicate that additional annotator information, such as confidence, secondary label and disagreement, can be used to effectively generate soft labels. Training classifiers with these soft labels then leads to improved performance and calibration on the hard label test set.
|
[
"Wu, Ben",
"Li, Yue",
"Mu, Yida",
"Scarton, Carolina",
"Bontcheva, Kalina",
"Song, Xingyi"
] |
Don't waste a single annotation: improving single-label classifiers through soft labels
|
findings-emnlp.355
|
2311.05265
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
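A hedged sketch of turning the annotator signals mentioned in the abstract above (confidence, secondary label) into soft targets and training against them; the mass-splitting rule is an illustrative assumption, not the paper's exact scheme.

```python
# Sketch: build a soft label from one annotator's primary label, confidence,
# and optional secondary label, then train with soft cross-entropy.
# The mass-splitting rule below is an illustrative assumption.
import torch
import torch.nn.functional as F

def soft_label(primary: int, confidence: float, secondary: int | None,
               num_classes: int) -> torch.Tensor:
    """confidence in [0, 1]; leftover mass goes to the secondary label if
    given, otherwise it is spread uniformly over the remaining classes."""
    probs = torch.zeros(num_classes)
    probs[primary] = confidence
    leftover = 1.0 - confidence
    if secondary is not None:
        probs[secondary] += leftover
    else:
        others = [c for c in range(num_classes) if c != primary]
        probs[torch.tensor(others)] += leftover / len(others)
    return probs

def soft_ce(logits: torch.Tensor, targets: torch.Tensor) -> torch.Tensor:
    """Cross-entropy against soft targets (mean over the batch)."""
    return -(targets * F.log_softmax(logits, dim=-1)).sum(-1).mean()

# Example: annotator chose class 2 with 0.7 confidence, secondary label 0.
target = soft_label(primary=2, confidence=0.7, secondary=0, num_classes=4)
# -> tensor([0.3000, 0.0000, 0.7000, 0.0000])
```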
https://aclanthology.org/2023.findings-emnlp.356.bib
|
https://aclanthology.org/2023.findings-emnlp.356/
|
@inproceedings{guo-etal-2023-black,
title = "Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation",
author = "Guo, Zixian and
Wei, Yuxiang and
Liu, Ming and
Ji, Zhilong and
Bai, Jinfeng and
Guo, Yiwen and
Zuo, Wangmeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.356",
doi = "10.18653/v1/2023.findings-emnlp.356",
pages = "5356--5368",
abstract = "Parameter-efficient fine-tuning (PEFT) methods have provided an effective way for adapting large vision-language models to specific tasks or scenarios. Typically, they learn a very small scale of parameters for pre-trained models in a white-box formulation, which assumes model architectures to be known and parameters to be accessible. However, large models are often not open-source due to considerations of preventing abuse or commercial factors, hence posing a barrier to the deployment of white-box PEFT methods. To alleviate the dependence on model accessibility, we introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models. Specifically, considering that the backpropagation gradients are blocked, we approximate the gradients of textual prompts by analyzing the predictions with perturbed prompts. Secondly, a lightweight adapter is deployed over the output feature of the inaccessible model, further facilitating the model adaptation process. Empowered with these designs, our CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods. Our code will be made publicly available.",
}
|
Parameter-efficient fine-tuning (PEFT) methods have provided an effective way for adapting large vision-language models to specific tasks or scenarios. Typically, they learn a very small number of parameters for pre-trained models in a white-box formulation, which assumes model architectures to be known and parameters to be accessible. However, large models are often not open-source, whether to prevent abuse or for commercial reasons, which poses a barrier to the deployment of white-box PEFT methods. To alleviate the dependence on model accessibility, we introduce collaborative black-box tuning (CBBT) for both textual prompt optimization and output feature adaptation for black-box models. Specifically, considering that the backpropagation gradients are blocked, we approximate the gradients of textual prompts by analyzing the predictions with perturbed prompts. In addition, a lightweight adapter is deployed over the output feature of the inaccessible model, further facilitating the model adaptation process. Empowered with these designs, our CBBT is extensively evaluated on eleven downstream benchmarks and achieves remarkable improvements compared to existing black-box VL adaptation methods. Our code will be made publicly available.
|
[
"Guo, Zixian",
"Wei, Yuxiang",
"Liu, Ming",
"Ji, Zhilong",
"Bai, Jinfeng",
"Guo, Yiwen",
"Zuo, Wangmeng"
] |
Black-Box Tuning of Vision-Language Models with Effective Gradient Approximation
|
findings-emnlp.356
|
2312.15901
|
[
"https://github.com/guozix/cbbt"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
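The CBBT abstract above describes approximating prompt gradients from predictions on perturbed prompts. Below is a generic two-point zeroth-order estimator in that spirit; the exact estimator and update rule used in the paper may differ.

```python
# Sketch of a two-point zeroth-order gradient estimate for a continuous
# prompt vector, querying only a black-box loss (no backpropagation).
# This is the generic SPSA-style estimator, not necessarily CBBT's exact rule.
import numpy as np

def zo_gradient(loss_fn, prompt: np.ndarray, sigma: float = 0.01,
                n_samples: int = 16, seed: int = 0) -> np.ndarray:
    """Estimate grad loss(prompt) from loss values at perturbed prompts."""
    rng = np.random.default_rng(seed)
    grad = np.zeros_like(prompt)
    for _ in range(n_samples):
        u = rng.standard_normal(prompt.shape)        # random direction
        delta = loss_fn(prompt + sigma * u) - loss_fn(prompt - sigma * u)
        grad += (delta / (2.0 * sigma)) * u          # directional derivative * u
    return grad / n_samples

# Usage: repeatedly step the prompt against the estimated gradient, e.g.
# prompt -= lr * zo_gradient(black_box_loss, prompt)
```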
https://aclanthology.org/2023.findings-emnlp.357.bib
|
https://aclanthology.org/2023.findings-emnlp.357/
|
@inproceedings{bai-etal-2023-determine,
title = "How to Determine the Most Powerful Pre-trained Language Model without Brute Force Fine-tuning? An Empirical Survey",
author = "Bai, Jun and
Zhang, Xiaofeng and
Li, Chen and
Hong, Hanhua and
Xu, Xi and
Lin, Chenghua and
Rong, Wenge",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.357",
doi = "10.18653/v1/2023.findings-emnlp.357",
pages = "5369--5382",
abstract = "Transferability estimation has been attached to great attention in the computer vision fields. Researchers try to estimate with low computational cost the performance of a model when transferred from a source task to a given target task. Considering the effectiveness of such estimations, the communities of natural language processing also began to study similar problems for the selection of pre-trained language models. However, there is a lack of a comprehensive comparison between these estimation methods yet. Also, the differences between vision and language scenarios make it doubtful whether previous conclusions can be established across fields. In this paper, we first conduct a thorough survey of existing transferability estimation methods being able to find the most suitable model, then we conduct a detailed empirical study for the surveyed methods based on the GLUE benchmark. From qualitative and quantitative analyses, we demonstrate the strengths and weaknesses of existing methods and show that H-Score generally performs well with superiorities in effectiveness and efficiency. We also outline the difficulties of consideration of training details, applicability to text generation, and consistency to certain metrics which shed light on future directions.",
}
|
Transferability estimation has attracted great attention in the field of computer vision. Researchers try to estimate, at low computational cost, the performance of a model when transferred from a source task to a given target task. Given the effectiveness of such estimations, the natural language processing community has also begun to study similar problems for the selection of pre-trained language models. However, there is not yet a comprehensive comparison of these estimation methods. Also, the differences between vision and language scenarios make it doubtful whether previous conclusions carry over across fields. In this paper, we first conduct a thorough survey of existing transferability estimation methods that are able to find the most suitable model, and then conduct a detailed empirical study of the surveyed methods on the GLUE benchmark. Through qualitative and quantitative analyses, we demonstrate the strengths and weaknesses of existing methods and show that H-Score generally performs well, with advantages in both effectiveness and efficiency. We also outline open difficulties concerning the treatment of training details, applicability to text generation, and consistency with certain metrics, which shed light on future directions.
|
[
"Bai, Jun",
"Zhang, Xiaofeng",
"Li, Chen",
"Hong, Hanhua",
"Xu, Xi",
"Lin, Chenghua",
"Rong, Wenge"
] |
How to Determine the Most Powerful Pre-trained Language Model without Brute Force Fine-tuning? An Empirical Survey
|
findings-emnlp.357
|
2312.04775
|
[
"https://github.com/ba1jun/model-selection-nlp"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
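The survey above highlights H-Score. A minimal sketch of the standard H-Score computation (Bao et al., 2019) on extracted features follows; the pseudo-inverse is a common stability choice in implementations rather than part of the definition.

```python
# Sketch of H-Score (Bao et al., 2019): tr(cov(f)^+ @ cov(E[f|y])).
# Higher is better: class-conditional feature means vary a lot relative
# to the overall feature covariance.
import numpy as np

def h_score(features: np.ndarray, labels: np.ndarray) -> float:
    """features: (n, d) target-task features from a pre-trained model;
    labels: (n,) integer class labels."""
    f = features - features.mean(axis=0, keepdims=True)
    cov_f = np.cov(f, rowvar=False)
    # Replace each feature row by its class-conditional mean.
    g = np.zeros_like(f)
    for c in np.unique(labels):
        idx = labels == c
        g[idx] = f[idx].mean(axis=0)
    cov_g = np.cov(g, rowvar=False)
    # Pseudo-inverse for stability when cov_f is ill-conditioned.
    return float(np.trace(np.linalg.pinv(cov_f, rcond=1e-8) @ cov_g))
```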
https://aclanthology.org/2023.findings-emnlp.358.bib
|
https://aclanthology.org/2023.findings-emnlp.358/
|
@inproceedings{yu-etal-2023-licon,
title = "Licon: A Diverse, Controllable and Challenging Linguistic Concept Learning Benchmark",
author = "Yu, Shenglong and
Zhang, Ying and
Guo, Wenya and
Zhang, Zhengkun and
Zhou, Ru and
Yuan, Xiaojie",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.358",
doi = "10.18653/v1/2023.findings-emnlp.358",
pages = "5383--5398",
abstract = "Concept Learning requires learning the definition of a general category from given training examples. Most of the existing methods focus on learning concepts from images. However, the visual information cannot present abstract concepts exactly, which struggles the introduction of novel concepts related to known concepts (e.g., {`}Plant{'}$\rightarrow${`}Asteroids{'}). In this paper, inspired by the fact that humans learn most concepts through linguistic description, we introduce Linguistic Concept Learning benchmark (Licon), where concepts in diverse forms (e.g., plain attributes, images, and text) are defined by linguistic descriptions. The difficulty to learn novel concepts can be controlled by the number of attributes or the hierarchical relationships between concepts. The diverse and controllable concepts are used to support challenging evaluation tasks, including concept classification, attribute prediction, and concept relationship recognition. In addition, we design an entailment-based concept learning method (EnC) to model the relationship among concepts. Extensive experiments demonstrate the effectiveness of EnC. The benchmark will be released to the public soon.",
}
|
Concept Learning requires learning the definition of a general category from given training examples. Most of the existing methods focus on learning concepts from images. However, visual information cannot present abstract concepts exactly, which hinders the introduction of novel concepts related to known concepts (e.g., {`}Plant{'}$\rightarrow${`}Asteroids{'}). In this paper, inspired by the fact that humans learn most concepts through linguistic description, we introduce the Linguistic Concept Learning benchmark (Licon), where concepts in diverse forms (e.g., plain attributes, images, and text) are defined by linguistic descriptions. The difficulty of learning novel concepts can be controlled by the number of attributes or the hierarchical relationships between concepts. The diverse and controllable concepts are used to support challenging evaluation tasks, including concept classification, attribute prediction, and concept relationship recognition. In addition, we design an entailment-based concept learning method (EnC) to model the relationship among concepts. Extensive experiments demonstrate the effectiveness of EnC. The benchmark will be released to the public soon.
|
[
"Yu, Shenglong",
"Zhang, Ying",
"Guo, Wenya",
"Zhang, Zhengkun",
"Zhou, Ru",
"Yuan, Xiaojie"
] |
Licon: A Diverse, Controllable and Challenging Linguistic Concept Learning Benchmark
|
findings-emnlp.358
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.359.bib
|
https://aclanthology.org/2023.findings-emnlp.359/
|
@inproceedings{feldhus-etal-2023-interrolang,
title = "{I}nterro{L}ang: Exploring {NLP} Models and Datasets through Dialogue-based Explanations",
author = {Feldhus, Nils and
Wang, Qianli and
Anikina, Tatiana and
Chopra, Sahil and
Oguz, Cennet and
M{\"o}ller, Sebastian},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.359",
doi = "10.18653/v1/2023.findings-emnlp.359",
pages = "5399--5421",
abstract = "While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface. Such a dialogue system can help users explore datasets and models with explanations in a contextualized manner, e.g. via clarification or follow-up questions, and through a natural language interface. We adapt the conversational explanation framework TalkToModel (Slack et al., 2022) to the NLP domain, add new NLP-specific operations such as free-text rationalization, and illustrate its generalizability on three NLP tasks (dialogue act classification, question answering, hate speech detection). To recognize user queries for explanations, we evaluate fine-tuned and few-shot prompting models and implement a novel adapter-based approach. We then conduct two user studies on (1) the perceived correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. how objectively helpful dialogical explanations are for humans in figuring out the model{'}s predicted label when it{'}s not shown. We found rationalization and feature attribution were helpful in explaining the model behavior. Moreover, users could more reliably predict the model outcome based on an explanation dialogue rather than one-off explanations.",
}
|
While recently developed NLP explainability methods let us open the black box in various ways (Madsen et al., 2022), a missing ingredient in this endeavor is an interactive tool offering a conversational interface. Such a dialogue system can help users explore datasets and models with explanations in a contextualized manner, e.g. via clarification or follow-up questions, and through a natural language interface. We adapt the conversational explanation framework TalkToModel (Slack et al., 2022) to the NLP domain, add new NLP-specific operations such as free-text rationalization, and illustrate its generalizability on three NLP tasks (dialogue act classification, question answering, hate speech detection). To recognize user queries for explanations, we evaluate fine-tuned and few-shot prompting models and implement a novel adapter-based approach. We then conduct two user studies on (1) the perceived correctness and helpfulness of the dialogues, and (2) the simulatability, i.e. how objectively helpful dialogical explanations are for humans in figuring out the model{'}s predicted label when it{'}s not shown. We found rationalization and feature attribution were helpful in explaining the model behavior. Moreover, users could more reliably predict the model outcome based on an explanation dialogue rather than one-off explanations.
|
[
"Feldhus, Nils",
"Wang, Qianli",
"Anikina, Tatiana",
"Chopra, Sahil",
"Oguz, Cennet",
"M{\\\"o}ller, Sebastian"
] |
InterroLang: Exploring NLP Models and Datasets through Dialogue-based Explanations
|
findings-emnlp.359
|
2310.05592
|
[
"https://github.com/dfki-nlp/interrolang"
] |
https://huggingface.co/papers/2310.05592
| 1 | 0 | 0 | 6 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.360.bib
|
https://aclanthology.org/2023.findings-emnlp.360/
|
@inproceedings{ramakrishna-etal-2023-invite,
title = "{INVITE}: a Testbed of Automatically Generated Invalid Questions to Evaluate Large Language Models for Hallucinations",
author = "Ramakrishna, Anil and
Gupta, Rahul and
Lehmann, Jens and
Ziyadi, Morteza",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.360",
doi = "10.18653/v1/2023.findings-emnlp.360",
pages = "5422--5429",
abstract = "Recent advancements in Large language models (LLMs) have enabled them to hold free form conversations over multiple turns, but they exhibit a tendency to make unfounded and incorrect statements, commonly known as hallucinations. In particular, LLMs hallucinate frequently when given invalid questions, i.e. ones with incorrect assumptions. The most common approach to evaluate LLMs on hallucinations is to test them on Question Answering (QA) test sets such as TruthfulQA. However, LLMs are increasingly pretrained on massive text corpora scraped from the Internet, which may inevitably expose these test sets to the model during training, leading eventually to an overestimation of model performances on these test sets. In this work, we present an alternative framework to address this risk and to foster further research towards making LLMs robust against invalid questions. We name our framework INVITE: a testbed of automatically generated INValId questions to evaluaTE large language models for hallucinations. In each instantiation, our framework is set up to create a fresh batch of invalid questions by distorting valid facts in which subjects or objects are replaced by similar entities. We evaluate several state of the art LLMs against a testset generated by our framework and highlight its capacity to trigger hallucinations in these models.",
}
|
Recent advancements in large language models (LLMs) have enabled them to hold free-form conversations over multiple turns, but they exhibit a tendency to make unfounded and incorrect statements, commonly known as hallucinations. In particular, LLMs hallucinate frequently when given invalid questions, i.e. ones with incorrect assumptions. The most common approach to evaluate LLMs on hallucinations is to test them on Question Answering (QA) test sets such as TruthfulQA. However, LLMs are increasingly pretrained on massive text corpora scraped from the Internet, which may inevitably expose these test sets to the model during training, leading eventually to an overestimation of model performances on these test sets. In this work, we present an alternative framework to address this risk and to foster further research towards making LLMs robust against invalid questions. We name our framework INVITE: a testbed of automatically generated INValId questions to evaluaTE large language models for hallucinations. In each instantiation, our framework is set up to create a fresh batch of invalid questions by distorting valid facts in which subjects or objects are replaced by similar entities. We evaluate several state-of-the-art LLMs against a test set generated by our framework and highlight its capacity to trigger hallucinations in these models.
|
[
"Ramakrishna, Anil",
"Gupta, Rahul",
"Lehmann, Jens",
"Ziyadi, Morteza"
] |
INVITE: a Testbed of Automatically Generated Invalid Questions to Evaluate Large Language Models for Hallucinations
|
findings-emnlp.360
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
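A hedged sketch of the fact-distortion step the INVITE abstract above describes: swapping a subject or object in a valid fact for a similar entity to yield an invalid question. The entity pools and question template are illustrative stand-ins, not the framework's actual data.

```python
# Sketch: distort a valid (subject, relation, object) fact by swapping in a
# similar entity, then render it as an invalid-premise question.
# The entity pools and template below are illustrative, not INVITE's data.
import random

SIMILAR = {
    "Neil Armstrong": ["Buzz Aldrin", "Yuri Gagarin"],
    "the Moon": ["Mars", "Venus"],
}

def make_invalid_question(subj: str, relation: str, obj: str,
                          rng=random.Random(0)) -> str:
    """Swap the subject or the object for a similar-but-wrong entity."""
    if rng.random() < 0.5 and subj in SIMILAR:
        subj = rng.choice(SIMILAR[subj])
    elif obj in SIMILAR:
        obj = rng.choice(SIMILAR[obj])
    return f"When did {subj} {relation} {obj}?"

# Valid fact: (Neil Armstrong, first walk on, the Moon) might become
# "When did Neil Armstrong first walk on Mars?" -- an invalid premise.
```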
https://aclanthology.org/2023.findings-emnlp.361.bib
|
https://aclanthology.org/2023.findings-emnlp.361/
|
@inproceedings{akhtar-etal-2023-multimodal,
title = "Multimodal Automated Fact-Checking: A Survey",
author = "Akhtar, Mubashara and
Schlichtkrull, Michael and
Guo, Zhijiang and
Cocarascu, Oana and
Simperl, Elena and
Vlachos, Andreas",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.361",
doi = "10.18653/v1/2023.findings-emnlp.361",
pages = "5430--5448",
abstract = "Misinformation is often conveyed in multiple modalities, e.g. a miscaptioned image. Multimodal misinformation is perceived as more credible by humans, and spreads faster than its text-only counterparts. While an increasing body of research investigates automated fact-checking (AFC), previous surveys mostly focus on text. In this survey, we conceptualise a framework for AFC including subtasks unique to multimodal misinformation. Furthermore, we discuss related terms used in different communities and map them to our framework. We focus on four modalities prevalent in real-world fact-checking: text, image, audio, and video. We survey benchmarks and models, and discuss limitations and promising directions for future research",
}
|
Misinformation is often conveyed in multiple modalities, e.g. a miscaptioned image. Multimodal misinformation is perceived as more credible by humans, and spreads faster than its text-only counterparts. While an increasing body of research investigates automated fact-checking (AFC), previous surveys mostly focus on text. In this survey, we conceptualise a framework for AFC including subtasks unique to multimodal misinformation. Furthermore, we discuss related terms used in different communities and map them to our framework. We focus on four modalities prevalent in real-world fact-checking: text, image, audio, and video. We survey benchmarks and models, and discuss limitations and promising directions for future research.
|
[
"Akhtar, Mubashara",
"Schlichtkrull, Michael",
"Guo, Zhijiang",
"Cocarascu, Oana",
"Simperl, Elena",
"Vlachos, Andreas"
] |
Multimodal Automated Fact-Checking: A Survey
|
findings-emnlp.361
|
2305.13507
|
[
"https://github.com/cartus/automated-fact-checking-resources"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.362.bib
|
https://aclanthology.org/2023.findings-emnlp.362/
|
@inproceedings{puranik-etal-2023-protege,
title = "{PROTEGE}: Prompt-based Diverse Question Generation from Web Articles",
author = "Puranik, Vinayak and
Majumder, Anirban and
Chaoji, Vineet",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.362",
doi = "10.18653/v1/2023.findings-emnlp.362",
pages = "5449--5463",
abstract = "Rich and diverse knowledge bases (KB) are foundational building blocks for online knowledge sharing communities such as StackOverflow and Quora, and applications such as conversational assistants (aka chatbots). A popular format for knowledge bases is question-answer pairs (or FAQs), where questions are designed to accurately match a multitude of queries. In this paper, we address the problem of automatic creation of such Q{\&}A-based knowledge bases from domain-specific, long-form textual content (e.g., web articles). Specifically, we consider the problem of question generation, which is the task of generating questions given a paragraph of text as input, with a goal to achieve both diversity and fidelity of the generated questions. Towards this goal we propose PROTEGE, a diverse question generation framework which consists of (1) a novel encoder-decoder based Large Language Model (LLM) architecture which can take a variety of prompts and generate a diverse set of candidate questions, and (2) a hill-climbing algorithm that maximizes a sub-modular objective function to balance diversity with fidelity. Through our experiments on three popular public Q{\&}A datasets, we demonstrate that PROTEGE improves diversity by +16{\%} and fidelity by +8{\%} over diverse beam search and prompt-based baselines.",
}
|
Rich and diverse knowledge bases (KB) are foundational building blocks for online knowledge sharing communities such as StackOverflow and Quora, and applications such as conversational assistants (aka chatbots). A popular format for knowledge bases is question-answer pairs (or FAQs), where questions are designed to accurately match a multitude of queries. In this paper, we address the problem of automatic creation of such Q{\&}A-based knowledge bases from domain-specific, long-form textual content (e.g., web articles). Specifically, we consider the problem of question generation, which is the task of generating questions given a paragraph of text as input, with a goal to achieve both diversity and fidelity of the generated questions. Towards this goal we propose PROTEGE, a diverse question generation framework which consists of (1) a novel encoder-decoder based Large Language Model (LLM) architecture which can take a variety of prompts and generate a diverse set of candidate questions, and (2) a hill-climbing algorithm that maximizes a sub-modular objective function to balance diversity with fidelity. Through our experiments on three popular public Q{\&}A datasets, we demonstrate that PROTEGE improves diversity by +16{\%} and fidelity by +8{\%} over diverse beam search and prompt-based baselines.
|
[
"Puranik, Vinayak",
"Majumder, Anirban",
"Chaoji, Vineet"
] |
PROTEGE: Prompt-based Diverse Question Generation from Web Articles
|
findings-emnlp.362
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
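A minimal sketch of greedy selection under a diversity-plus-fidelity objective in the spirit of the PROTEGE abstract above; the scoring functions and their weighting are illustrative assumptions, and the paper's hill-climbing procedure may differ.

```python
# Sketch: greedily pick k candidate questions, trading off per-question
# fidelity against redundancy with already-selected questions.
# `fidelity` and `similarity` are illustrative placeholders.
from typing import Callable

def greedy_select(candidates: list[str], k: int,
                  fidelity: Callable[[str], float],
                  similarity: Callable[[str, str], float],
                  lam: float = 1.0) -> list[str]:
    selected: list[str] = []
    for _ in range(k):
        def gain(q: str) -> float:
            # Diversity term: penalize similarity to anything already chosen.
            redundancy = max((similarity(q, s) for s in selected), default=0.0)
            return fidelity(q) + lam * (1.0 - redundancy)
        remaining = [q for q in candidates if q not in selected]
        if not remaining:
            break
        selected.append(max(remaining, key=gain))
    return selected
```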
https://aclanthology.org/2023.findings-emnlp.363.bib
|
https://aclanthology.org/2023.findings-emnlp.363/
|
@inproceedings{hsu-etal-2023-gpt,
title = "{GPT}-4 as an Effective Zero-Shot Evaluator for Scientific Figure Captions",
author = "Hsu, Ting-Yao and
Huang, Chieh-Yang and
Rossi, Ryan and
Kim, Sungchul and
Giles, C. and
Huang, Ting-Hao",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.363",
doi = "10.18653/v1/2023.findings-emnlp.363",
pages = "5464--5474",
abstract = "There is growing interest in systems that generate captions for scientific figures. However, assessing these systems{'} output poses a significant challenge. Human evaluation requires academic expertise and is costly, while automatic evaluation depends on often low-quality author-written captions. This paper investigates using large language models (LLMs) as a cost-effective, reference-free method for evaluating figure captions. We first constructed SCICAP-EVAL, a human evaluation dataset that contains human judgments for 3,600 scientific figure captions, both original and machine-made, for 600 arXiv figures. We then prompted LLMs like GPT-4 and GPT-3 to score (1-6) each caption based on its potential to aid reader understanding, given relevant context such as figure-mentioning paragraphs. Results show that GPT-4, used as a zero-shot evaluator, outperformed all other models and even surpassed assessments made by computer science undergraduates, achieving a Kendall correlation score of 0.401 with Ph.D. students{'} rankings.",
}
|
There is growing interest in systems that generate captions for scientific figures. However, assessing these systems{'} output poses a significant challenge. Human evaluation requires academic expertise and is costly, while automatic evaluation depends on often low-quality author-written captions. This paper investigates using large language models (LLMs) as a cost-effective, reference-free method for evaluating figure captions. We first constructed SCICAP-EVAL, a human evaluation dataset that contains human judgments for 3,600 scientific figure captions, both original and machine-made, for 600 arXiv figures. We then prompted LLMs like GPT-4 and GPT-3 to score (1-6) each caption based on its potential to aid reader understanding, given relevant context such as figure-mentioning paragraphs. Results show that GPT-4, used as a zero-shot evaluator, outperformed all other models and even surpassed assessments made by computer science undergraduates, achieving a Kendall correlation score of 0.401 with Ph.D. students{'} rankings.
|
[
"Hsu, Ting-Yao",
"Huang, Chieh-Yang",
"Rossi, Ryan",
"Kim, Sungchul",
"Giles, C.",
"Huang, Ting-Hao"
] |
GPT-4 as an Effective Zero-Shot Evaluator for Scientific Figure Captions
|
findings-emnlp.363
|
2310.15405
|
[
""
] |
https://huggingface.co/papers/2310.15405
| 1 | 1 | 0 | 6 |
[] |
[] |
[] | 1 |
Poster
|
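The abstract above reports a Kendall correlation of 0.401 between GPT-4 scores and Ph.D. students' rankings. A small sketch of computing that kind of agreement with SciPy, on made-up numbers:

```python
# Sketch: Kendall correlation between LLM-assigned caption scores (1-6)
# and human quality rankings, using SciPy. The numbers are made up.
from scipy.stats import kendalltau

llm_scores  = [5, 3, 6, 2, 4, 1]   # e.g. GPT-4 helpfulness scores
human_ranks = [2, 4, 1, 6, 3, 5]   # e.g. human rankings (1 = best)

# Negate ranks so that "better" points the same way in both lists.
tau, p_value = kendalltau(llm_scores, [-r for r in human_ranks])
print(f"Kendall tau = {tau:.3f} (p = {p_value:.3f})")
```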
https://aclanthology.org/2023.findings-emnlp.364.bib
|
https://aclanthology.org/2023.findings-emnlp.364/
|
@inproceedings{fu-etal-2023-mulan,
title = "Mulan: A Multi-Level Alignment Model for Video Question Answering",
author = "Fu, Yu and
Cao, Cong and
Yang, Yuling and
Lu, Yuhai and
Yuan, Fangfang and
Wang, Dakui and
Liu, Yanbing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.364",
doi = "10.18653/v1/2023.findings-emnlp.364",
pages = "5475--5489",
abstract = "Video Question Answering (VideoQA) aims to answer questions about the visual content of a video. Current methods mainly focus on improving joint representations of video and text. However, these methods pay little attention to the fine-grained semantic interaction between video and text. In this paper, we propose Mulan: a Multi-Level Alignment Model for Video Question Answering, which establishes alignment between visual and textual modalities at the object-level, frame-level, and video-level. Specifically, for object-level alignment, we propose a mask-guided visual feature encoding method and a visual-guided text description method to learn fine-grained spatial information. For frame-level alignment, we introduce the use of visual features from individual frames, combined with a caption generator, to learn overall spatial information within the scene. For video-level alignment, we propose an expandable ordinal prompt for textual descriptions, combined with visual features, to learn temporal information. Experimental results show that our method outperforms the state-of-the-art methods, even when utilizing the smallest amount of extra visual-language pre-training data and a reduced number of trainable parameters.",
}
|
Video Question Answering (VideoQA) aims to answer questions about the visual content of a video. Current methods mainly focus on improving joint representations of video and text. However, these methods pay little attention to the fine-grained semantic interaction between video and text. In this paper, we propose Mulan: a Multi-Level Alignment Model for Video Question Answering, which establishes alignment between visual and textual modalities at the object-level, frame-level, and video-level. Specifically, for object-level alignment, we propose a mask-guided visual feature encoding method and a visual-guided text description method to learn fine-grained spatial information. For frame-level alignment, we introduce the use of visual features from individual frames, combined with a caption generator, to learn overall spatial information within the scene. For video-level alignment, we propose an expandable ordinal prompt for textual descriptions, combined with visual features, to learn temporal information. Experimental results show that our method outperforms the state-of-the-art methods, even when utilizing the smallest amount of extra visual-language pre-training data and a reduced number of trainable parameters.
|
[
"Fu, Yu",
"Cao, Cong",
"Yang, Yuling",
"Lu, Yuhai",
"Yuan, Fangfang",
"Wang, Dakui",
"Liu, Yanbing"
] |
Mulan: A Multi-Level Alignment Model for Video Question Answering
|
findings-emnlp.364
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.365.bib
|
https://aclanthology.org/2023.findings-emnlp.365/
|
@inproceedings{yang-etal-2023-hare,
title = "{HARE}: Explainable Hate Speech Detection with Step-by-Step Reasoning",
author = "Yang, Yongjin and
Kim, Joonkee and
Kim, Yujin and
Ho, Namgyu and
Thorne, James and
Yun, Se-Young",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.365",
doi = "10.18653/v1/2023.findings-emnlp.365",
pages = "5490--5505",
abstract = "With the proliferation of social media, accurate detection of hate speech has become critical to ensure safety online. To combat nuanced forms of hate speech, it is important to identify and thoroughly explain hate speech to help users understand its harmful effects. Recent benchmarks have attempted to tackle this issue by training generative models on free-text annotations of implications in hateful text. However, we find significant reasoning gaps in the existing annotations schemes, which may hinder the supervision of detection models. In this paper, we introduce a hate speech detection framework, **HARE**, which harnesses the reasoning capabilities of large language models (LLMs) to fill these gaps in explanations of hate speech, thus enabling effective supervision of detection models. Experiments on SBIC and Implicit Hate benchmarks show that our method, using model-generated data, consistently outperforms baselines, using existing free-text human annotations. Analysis demonstrates that our method enhances the explanation quality of trained models and improves generalization to unseen datasets. Our code is available at https://github.com/joonkeekim/hare-hate-speech.git.",
}
|
With the proliferation of social media, accurate detection of hate speech has become critical to ensure safety online. To combat nuanced forms of hate speech, it is important to identify and thoroughly explain hate speech to help users understand its harmful effects. Recent benchmarks have attempted to tackle this issue by training generative models on free-text annotations of implications in hateful text. However, we find significant reasoning gaps in the existing annotation schemes, which may hinder the supervision of detection models. In this paper, we introduce a hate speech detection framework, **HARE**, which harnesses the reasoning capabilities of large language models (LLMs) to fill these gaps in explanations of hate speech, thus enabling effective supervision of detection models. Experiments on SBIC and Implicit Hate benchmarks show that our method, using model-generated data, consistently outperforms baselines that use existing free-text human annotations. Analysis demonstrates that our method enhances the explanation quality of trained models and improves generalization to unseen datasets. Our code is available at https://github.com/joonkeekim/hare-hate-speech.git.
|
[
"Yang, Yongjin",
"Kim, Joonkee",
"Kim, Yujin",
"Ho, Namgyu",
"Thorne, James",
"Yun, Se-Young"
] |
HARE: Explainable Hate Speech Detection with Step-by-Step Reasoning
|
findings-emnlp.365
|
2311.00321
|
[
"https://github.com/joonkeekim/hare-hate-speech"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
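A hedged sketch of the pattern the HARE abstract above describes: prompting an LLM for step-by-step reasoning that connects a post to its annotated implication, then using the result as richer supervision. The prompt wording and `call_llm` are illustrative placeholders, not the paper's template.

```python
# Sketch: ask an LLM for step-by-step reasoning connecting a post to its
# hate-speech label, to be used as richer supervision for a detector.
# The prompt text and `call_llm` are illustrative placeholders.
def build_hare_prompt(post: str, label: str, implication: str) -> str:
    return (
        f"Explain step by step why the following post is {label}, "
        "ending with the implied statement.\n"
        f"Post: {post}\n"
        f"Annotated implication: {implication}\n"
        "Reasoning:"
    )

def call_llm(prompt: str) -> str:
    """Placeholder for an LLM API call."""
    raise NotImplementedError

def make_training_example(post: str, label: str, implication: str) -> dict:
    rationale = call_llm(build_hare_prompt(post, label, implication))
    # Train the detector to generate the rationale and then the label.
    return {"input": post, "target": f"{rationale}\nLabel: {label}"}
```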
https://aclanthology.org/2023.findings-emnlp.366.bib
|
https://aclanthology.org/2023.findings-emnlp.366/
|
@inproceedings{shi-etal-2023-relm,
title = "{R}e{LM}: Leveraging Language Models for Enhanced Chemical Reaction Prediction",
author = "Shi, Yaorui and
Zhang, An and
Zhang, Enzhi and
Liu, Zhiyuan and
Wang, Xiang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.366",
doi = "10.18653/v1/2023.findings-emnlp.366",
pages = "5506--5520",
abstract = "Predicting chemical reactions, a fundamental challenge in chemistry, involves forecasting the resulting products from a given reaction process. Conventional techniques, notably those employing Graph Neural Networks (GNNs), are often limited by insufficient training data and their inability to utilize textual information, undermining their applicability in real-world applications. In this work, we propose **ReLM**, a novel framework that leverages the chemical knowledge encoded in language models (LMs) to assist GNNs, thereby enhancing the accuracy of real-world chemical reaction predictions. To further enhance the model{'}s robustness and interpretability, we incorporate the confidence score strategy, enabling the LMs to self-assess the reliability of their predictions. Our experimental results demonstrate that ReLM improves the performance of state-of-the-art GNN-based methods across various chemical reaction datasets, especially in out-of-distribution settings. Codes are available at https://github.com/syr-cn/ReLM.",
}
|
Predicting chemical reactions, a fundamental challenge in chemistry, involves forecasting the resulting products from a given reaction process. Conventional techniques, notably those employing Graph Neural Networks (GNNs), are often limited by insufficient training data and their inability to utilize textual information, undermining their applicability in real-world settings. In this work, we propose **ReLM**, a novel framework that leverages the chemical knowledge encoded in language models (LMs) to assist GNNs, thereby enhancing the accuracy of real-world chemical reaction predictions. To further enhance the model{'}s robustness and interpretability, we incorporate a confidence score strategy, enabling the LMs to self-assess the reliability of their predictions. Our experimental results demonstrate that ReLM improves the performance of state-of-the-art GNN-based methods across various chemical reaction datasets, especially in out-of-distribution settings. Codes are available at https://github.com/syr-cn/ReLM.
|
[
"Shi, Yaorui",
"Zhang, An",
"Zhang, Enzhi",
"Liu, Zhiyuan",
"Wang, Xiang"
] |
ReLM: Leveraging Language Models for Enhanced Chemical Reaction Prediction
|
findings-emnlp.366
|
2310.13590
|
[
"https://github.com/syr-cn/relm"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.367.bib
|
https://aclanthology.org/2023.findings-emnlp.367/
|
@inproceedings{lin-etal-2023-decomposing,
title = "Decomposing Complex Queries for Tip-of-the-tongue Retrieval",
author = "Lin, Kevin and
Lo, Kyle and
Gonzalez, Joseph and
Klein, Dan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.367",
doi = "10.18653/v1/2023.findings-emnlp.367",
pages = "5521--5533",
abstract = "When re-finding items, users who forget or are uncertain about identifying details often rely on creative strategies for expressing their information needs{---}complex queries that describe content elements (e.g., book characters or events), information beyond the document text (e.g., descriptions of book covers), or personal context (e.g., when they read a book). Standard retrieval models that rely on lexical or semantic overlap between query and document text are challenged in such retrieval settings, known as tip-of-the-tongue (TOT) retrieval. We introduce a simple but effective framework for handling such complex queries by decomposing the query with an LLM into individual clues routing those as subqueries to specialized retrievers, and ensembling the results. Our approach takes advantage of off-the-shelf retrievers (e.g., CLIP for retrieving images of book covers) or incorporate retriever-specific logic (e.g., date constraints). We show that our framework incorporating query decomposition into retrievers can improve gold book recall up to 6{\%} absolute gain for Recall@5 on a new collection of 14,441 real-world query-book pairs from an online community for resolving TOT inquiries.",
}
|
When re-finding items, users who forget or are uncertain about identifying details often rely on creative strategies for expressing their information needs{---}complex queries that describe content elements (e.g., book characters or events), information beyond the document text (e.g., descriptions of book covers), or personal context (e.g., when they read a book). Standard retrieval models that rely on lexical or semantic overlap between query and document text are challenged in such retrieval settings, known as tip-of-the-tongue (TOT) retrieval. We introduce a simple but effective framework for handling such complex queries by decomposing the query with an LLM into individual clues, routing those as subqueries to specialized retrievers, and ensembling the results. Our approach takes advantage of off-the-shelf retrievers (e.g., CLIP for retrieving images of book covers) and can incorporate retriever-specific logic (e.g., date constraints). We show that our framework incorporating query decomposition into retrievers can improve gold book recall by up to 6{\%} absolute (Recall@5) on a new collection of 14,441 real-world query-book pairs from an online community for resolving TOT inquiries.
|
[
"Lin, Kevin",
"Lo, Kyle",
"Gonzalez, Joseph",
"Klein, Dan"
] |
Decomposing Complex Queries for Tip-of-the-tongue Retrieval
|
findings-emnlp.367
|
2305.15053
|
[
""
] |
https://huggingface.co/papers/2305.15053
| 1 | 0 | 0 | 4 |
[] |
[
"nlpkevinl/whatsthatbook"
] |
[] | 1 |
Poster
|
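A hedged sketch of the decompose-route-ensemble pattern described in the abstract above; `llm_decompose`, the retriever callables, and the uniform score averaging are illustrative assumptions, not the paper's implementation.

```python
# Sketch of decompose -> route -> ensemble for tip-of-the-tongue retrieval.
# The clue types, retrievers, and uniform-weight ensembling are placeholders.
from collections import defaultdict
from typing import Callable

# Each retriever maps a subquery to {book_id: score}.
RETRIEVERS: dict[str, Callable[[str], dict[str, float]]] = {
    "plot":  lambda q: {},   # e.g. a dense text retriever over summaries
    "cover": lambda q: {},   # e.g. CLIP over cover images
    "date":  lambda q: {},   # e.g. a metadata filter turned into scores
}

def llm_decompose(query: str) -> dict[str, str]:
    """Placeholder for an LLM call that splits the query into typed clues."""
    return {"plot": "a boy raised by ghosts in a graveyard",
            "cover": "a silhouette of a boy among gravestones",
            "date": "published in the late 2000s"}

def tot_search(query: str, top_k: int = 5) -> list[str]:
    clues = llm_decompose(query)
    ensemble: dict[str, float] = defaultdict(float)
    for clue_type, subquery in clues.items():
        for book, score in RETRIEVERS[clue_type](subquery).items():
            ensemble[book] += score            # uniform-weight ensembling
    return sorted(ensemble, key=ensemble.get, reverse=True)[:top_k]
```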
https://aclanthology.org/2023.findings-emnlp.368.bib
|
https://aclanthology.org/2023.findings-emnlp.368/
|
@inproceedings{vida-etal-2023-values,
title = "Values, Ethics, Morals? On the Use of Moral Concepts in {NLP} Research",
author = "Vida, Karina and
Simon, Judith and
Lauscher, Anne",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.368",
doi = "10.18653/v1/2023.findings-emnlp.368",
pages = "5534--5554",
abstract = "With language technology increasingly affecting individuals{'} lives, many recent works have investigated the ethical aspects of NLP. Among other topics, researchers focused on the notion of morality, investigating, for example, which moral judgements language models make. However, there has been little to no discussion of the terminology and the theories underpinning those efforts and their implications. This lack is highly problematic, as it hides the works{'} underlying assumptions and hinders a thorough and targeted scientific debate of morality in NLP. In this work, we address this research gap by (a) providing an overview of some important ethical concepts stemming from philosophy and (b) systematically surveying the existing literature on moral NLP w.r.t. their philosophical foundation, terminology, and data basis. For instance, we analyse what ethical theory an approach is based on, how this decision is justified, and what implications it entails. Our findings surveying 92 papers show that, for instance, most papers neither provide a clear definition of the terms they use nor adhere to definitions from philosophy. Finally, (c) we give three recommendations for future research in the field. We hope our work will lead to a more informed, careful, and sound discussion of morality in language technology.",
}
|
With language technology increasingly affecting individuals{'} lives, many recent works have investigated the ethical aspects of NLP. Among other topics, researchers have focused on the notion of morality, investigating, for example, which moral judgements language models make. However, there has been little to no discussion of the terminology and the theories underpinning those efforts and their implications. This gap is highly problematic, as it hides the works{'} underlying assumptions and hinders a thorough and targeted scientific debate of morality in NLP. In this work, we address this research gap by (a) providing an overview of some important ethical concepts stemming from philosophy and (b) systematically surveying the existing literature on moral NLP w.r.t. their philosophical foundation, terminology, and data basis. For instance, we analyse what ethical theory an approach is based on, how this decision is justified, and what implications it entails. Our findings surveying 92 papers show that, for instance, most papers neither provide a clear definition of the terms they use nor adhere to definitions from philosophy. Finally, (c) we give three recommendations for future research in the field. We hope our work will lead to a more informed, careful, and sound discussion of morality in language technology.
|
[
"Vida, Karina",
"Simon, Judith",
"Lauscher, Anne"
] |
Values, Ethics, Morals? On the Use of Moral Concepts in NLP Research
|
findings-emnlp.368
|
2310.13915
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.369.bib
|
https://aclanthology.org/2023.findings-emnlp.369/
|
@inproceedings{wang-jansen-2023-self,
title = "Self-Supervised Behavior Cloned Transformers are Path Crawlers for Text Games",
author = "Wang, Ruoyao and
Jansen, Peter",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.369",
doi = "10.18653/v1/2023.findings-emnlp.369",
pages = "5555--5565",
abstract = "In this work, we introduce a self-supervised behavior cloning transformer for text games, which are challenging benchmarks for multi-step reasoning in virtual environments. Traditionally, Behavior Cloning Transformers excel in such tasks but rely on supervised training data. Our approach auto-generates training data by exploring trajectories (defined by common macro-action sequences) that lead to reward within the games, while determining the generality and utility of these trajectories by rapidly training small models then evalauating their performance on unseen development games. Through empirical analysis, we show our method consistently uncovers generalizable training data, achieving about 90{\%} performance of supervised systems across three benchmark text games.",
}
|
In this work, we introduce a self-supervised behavior cloning transformer for text games, which are challenging benchmarks for multi-step reasoning in virtual environments. Traditionally, Behavior Cloning Transformers excel in such tasks but rely on supervised training data. Our approach auto-generates training data by exploring trajectories (defined by common macro-action sequences) that lead to reward within the games, while determining the generality and utility of these trajectories by rapidly training small models and then evaluating their performance on unseen development games. Through empirical analysis, we show our method consistently uncovers generalizable training data, achieving about 90{\%} of the performance of supervised systems across three benchmark text games.
|
[
"Wang, Ruoyao",
"Jansen, Peter"
] |
Self-Supervised Behavior Cloned Transformers are Path Crawlers for Text Games
|
findings-emnlp.369
|
2312.04657
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
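A minimal sketch of the path-crawling idea in the abstract above: exhaustively trying macro-action sequences and keeping those that reach reward as behavior-cloning data. The environment API (`env_factory`, `env.step`) is an illustrative placeholder.

```python
# Sketch: crawl macro-action sequences in a text game, keep trajectories that
# end with positive reward, then use them as behavior-cloning data.
# The environment interface below is an illustrative placeholder.
from itertools import product

def crawl_paths(env_factory, macros: list[str], max_len: int = 3) -> list[list[str]]:
    """Exhaustively try macro-action sequences up to max_len; return the
    ones that end with positive reward."""
    winning = []
    for length in range(1, max_len + 1):
        for path in product(macros, repeat=length):
            env = env_factory()          # fresh game instance per trajectory
            reward = 0.0
            for action in path:
                _obs, reward, done = env.step(action)
                if done:
                    break
            if reward > 0:
                winning.append(list(path))
    return winning
```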
https://aclanthology.org/2023.findings-emnlp.370.bib
|
https://aclanthology.org/2023.findings-emnlp.370/
|
@inproceedings{xiong-etal-2023-adapting,
title = "Adapting Pretrained Text-to-Text Models for Long Text Sequences",
author = "Xiong, Wenhan and
Gupta, Anchit and
Toshniwal, Shubham and
Mehdad, Yashar and
Yih, Scott",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.370",
doi = "10.18653/v1/2023.findings-emnlp.370",
pages = "5566--5578",
abstract = "We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline {--} model architecture, optimization objective, and pretraining corpus, we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with \textit{pooling-augmented blockwise attention}, and pretrain the model with a masked-span prediction task with spans of varying lengths. In terms of the pretraining corpus, we find that using randomly concatenated short-documents from a large open-domain corpus results in better performance than using existing long document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes the new state of the art on \textit{five} long-text summarization datasets, often outperforming previous methods with larger model sizes.",
}
|
We present an empirical study of adapting an existing pretrained text-to-text model for long-sequence inputs. Through a comprehensive study along three axes of the pretraining pipeline {--} model architecture, optimization objective, and pretraining corpus, we propose an effective recipe to build long-context models from existing short-context models. Specifically, we replace the full attention in transformers with \textit{pooling-augmented blockwise attention}, and pretrain the model with a masked-span prediction task with spans of varying lengths. In terms of the pretraining corpus, we find that using randomly concatenated short documents from a large open-domain corpus results in better performance than using existing long document corpora, which are typically limited in their domain coverage. With these findings, we build a long-context model that achieves competitive performance on long-text QA tasks and establishes the new state of the art on \textit{five} long-text summarization datasets, often outperforming previous methods with larger model sizes.
|
[
"Xiong, Wenhan",
"Gupta, Anchit",
"Toshniwal, Shubham",
"Mehdad, Yashar",
"Yih, Scott"
] |
Adapting Pretrained Text-to-Text Models for Long Text Sequences
|
findings-emnlp.370
|
2209.10052
|
[
"https://github.com/facebookresearch/bart_ls"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
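A minimal sketch of building a masked-span prediction example with spans of varying lengths over concatenated short documents, in the spirit of the abstract above; the T5-style sentinels and the span-length distribution are illustrative assumptions, not the paper's exact recipe.

```python
# Sketch: concatenate short documents and mask variable-length spans with
# T5-style sentinels. Mask rate and span-length distribution are assumptions.
import random

def make_span_example(docs: list[str], mask_rate: float = 0.15,
                      mean_span: int = 5, rng=random.Random(0)):
    tokens = " ".join(docs).split()          # randomly concatenated short docs
    n_to_mask = int(len(tokens) * mask_rate)
    inputs, targets, i, sid = [], [], 0, 0
    while i < len(tokens):
        if n_to_mask > 0 and rng.random() < mask_rate:
            span = max(1, int(rng.expovariate(1.0 / mean_span)))  # varying length
            span = min(span, n_to_mask, len(tokens) - i)
            inputs.append(f"<extra_id_{sid}>")
            targets.append(f"<extra_id_{sid}> " + " ".join(tokens[i:i + span]))
            i += span
            n_to_mask -= span
            sid += 1
        else:
            inputs.append(tokens[i])
            i += 1
    return " ".join(inputs), " ".join(targets)
```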
https://aclanthology.org/2023.findings-emnlp.371.bib
|
https://aclanthology.org/2023.findings-emnlp.371/
|
@inproceedings{zhang-etal-2023-xdial,
title = "x{D}ial-Eval: A Multilingual Open-Domain Dialogue Evaluation Benchmark",
author = "Zhang, Chen and
D{'}Haro, Luis and
Tang, Chengguang and
Shi, Ke and
Tang, Guohua and
Li, Haizhou",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.371",
doi = "10.18653/v1/2023.findings-emnlp.371",
pages = "5579--5601",
abstract = "Recent advancements in reference-free learned metrics for open-domain dialogue evaluation have been driven by the progress in pre-trained language models and the availability of dialogue data with high-quality human annotations. However, current studies predominantly concentrate on English dialogues, and the generalization of these metrics to other languages has not been fully examined. This is largely due to the absence of a multilingual dialogue evaluation benchmark. To address the issue, we introduce xDial-Eval, built on top of open-source English dialogue evaluation datasets. xDial-Eval includes 12 turn-level and 6 dialogue-level English datasets, comprising 14930 annotated turns and 8691 annotated dialogues respectively. The English dialogue data are extended to nine other languages with commercial machine translation systems. On xDial-Eval, we conduct comprehensive analyses of previous BERT-based metrics and the recently-emerged large language models. Lastly, we establish strong self-supervised and multilingual baselines. In terms of average Pearson correlations over all datasets and languages, the best baseline outperforms OpenAI{'}s ChatGPT by absolute improvements of 6.5{\%} and 4.6{\%} at the turn and dialogue levels respectively, albeit with much fewer parameters. The data and code are publicly available at https://github.com/e0397123/xDial-Eval.",
}
|
Recent advancements in reference-free learned metrics for open-domain dialogue evaluation have been driven by the progress in pre-trained language models and the availability of dialogue data with high-quality human annotations. However, current studies predominantly concentrate on English dialogues, and the generalization of these metrics to other languages has not been fully examined. This is largely due to the absence of a multilingual dialogue evaluation benchmark. To address the issue, we introduce xDial-Eval, built on top of open-source English dialogue evaluation datasets. xDial-Eval includes 12 turn-level and 6 dialogue-level English datasets, comprising 14,930 annotated turns and 8,691 annotated dialogues, respectively. The English dialogue data are extended to nine other languages with commercial machine translation systems. On xDial-Eval, we conduct comprehensive analyses of previous BERT-based metrics and recently emerged large language models. Lastly, we establish strong self-supervised and multilingual baselines. In terms of average Pearson correlations over all datasets and languages, the best baseline outperforms OpenAI{'}s ChatGPT by absolute improvements of 6.5{\%} and 4.6{\%} at the turn and dialogue levels respectively, albeit with far fewer parameters. The data and code are publicly available at https://github.com/e0397123/xDial-Eval.
|
[
"Zhang, Chen",
"D{'}Haro, Luis",
"Tang, Chengguang",
"Shi, Ke",
"Tang, Guohua",
"Li, Haizhou"
] |
xDial-Eval: A Multilingual Open-Domain Dialogue Evaluation Benchmark
|
findings-emnlp.371
|
2310.08958
|
[
"https://github.com/e0397123/xdial-eval"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
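The headline comparison above averages per-dataset, per-language Pearson correlations between metric scores and human annotations. Below is a minimal sketch of that aggregation step, assuming a toy data layout; the dataset name `dailydialog-eval` and all numbers are illustrative, not taken from the benchmark.

```python
# Minimal sketch (not the benchmark's own code) of averaging Pearson
# correlations between a learned metric and human ratings across
# (dataset, language) pairs. The data layout below is assumed.
from scipy.stats import pearsonr

def average_pearson(scores):
    """scores maps (dataset, language) -> (metric_scores, human_ratings)."""
    correlations = []
    for metric_scores, human_ratings in scores.values():
        r, _p = pearsonr(metric_scores, human_ratings)  # r lies in [-1, 1]
        correlations.append(r)
    return sum(correlations) / len(correlations)

# Toy usage with made-up numbers for two languages of one dataset.
example = {
    ("dailydialog-eval", "en"): ([0.2, 0.7, 0.9], [1, 3, 5]),
    ("dailydialog-eval", "zh"): ([0.1, 0.5, 0.8], [2, 3, 4]),
}
print(f"average Pearson r = {average_pearson(example):.3f}")
```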
https://aclanthology.org/2023.findings-emnlp.372.bib
|
https://aclanthology.org/2023.findings-emnlp.372/
|
@inproceedings{macina-etal-2023-mathdial,
title = "{M}ath{D}ial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems",
author = "Macina, Jakub and
Daheim, Nico and
Chowdhury, Sankalan and
Sinha, Tanmay and
Kapur, Manu and
Gurevych, Iryna and
Sachan, Mrinmaya",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.372",
doi = "10.18653/v1/2023.findings-emnlp.372",
pages = "5602--5621",
abstract = "While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate MathDial and its extensive annotations can be used to finetune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions. The dataset is released publicly.",
}
|
While automatic dialogue tutors hold great potential in making education personalized and more accessible, research on such systems has been hampered by a lack of sufficiently large and high-quality datasets. Collecting such datasets remains challenging, as recording tutoring sessions raises privacy concerns and crowdsourcing leads to insufficient data quality. To address this, we propose a framework to generate such dialogues by pairing human teachers with a Large Language Model (LLM) prompted to represent common student errors. We describe how we use this framework to collect MathDial, a dataset of 3k one-to-one teacher-student tutoring dialogues grounded in multi-step math reasoning problems. While models like GPT-3 are good problem solvers, they fail at tutoring because they generate factually incorrect feedback or are prone to revealing solutions to students too early. To overcome this, we let teachers provide learning opportunities to students by guiding them using various scaffolding questions according to a taxonomy of teacher moves. We demonstrate MathDial and its extensive annotations can be used to finetune models to be more effective tutors (and not just solvers). We confirm this by automatic and human evaluation, notably in an interactive setting that measures the trade-off between student solving success and telling solutions. The dataset is released publicly.
|
[
"Macina, Jakub",
"Daheim, Nico",
"Chowdhury, Sankalan",
"Sinha, Tanmay",
"Kapur, Manu",
"Gurevych, Iryna",
"Sachan, Mrinmaya"
] |
MathDial: A Dialogue Tutoring Dataset with Rich Pedagogical Properties Grounded in Math Reasoning Problems
|
findings-emnlp.372
|
2305.14536
|
[
"https://github.com/eth-nlped/mathdial"
] |
https://huggingface.co/papers/2305.14536
| 1 | 1 | 0 | 7 |
[] |
[
"eth-nlped/mathdial"
] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.373.bib
|
https://aclanthology.org/2023.findings-emnlp.373/
|
@inproceedings{peng-etal-2023-towards,
title = "Towards Making the Most of {C}hat{GPT} for Machine Translation",
author = "Peng, Keqin and
Ding, Liang and
Zhong, Qihuang and
Shen, Li and
Liu, Xuebo and
Zhang, Min and
Ouyang, Yuanxin and
Tao, Dacheng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.373",
doi = "10.18653/v1/2023.findings-emnlp.373",
pages = "5622--5633",
abstract = "ChatGPT shows remarkable capabilities for machine translation (MT). Several prior studies have shown that it achieves comparable results to commercial systems for high-resource languages, but lags behind in complex tasks, e.g, low-resource and distant-language-pairs translation. However, they usually adopt simple prompts which can not fully elicit the capability of ChatGPT. In this report, we aim to further mine ChatGPT{'}s translation ability by revisiting several aspects: temperature, task information, and domain information, and correspondingly propose two (simple but effective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts (DSP). We show that: 1) The performance of ChatGPT depends largely on temperature, and a lower temperature usually can achieve better performance; 2) Emphasizing the task information further improves ChatGPT{'}s performance, particularly in complex MT tasks; 3) Introducing domain information can elicit ChatGPT{'}s generalization ability and improve its performance in the specific domain; 4) ChatGPT tends to generate hallucinations for non-English-centric MT tasks, which can be partially addressed by our proposed prompts but still need to be highlighted for the MT/NLP community. We also explore the effects of advanced in-context learning strategies and find a (negative but interesting) observation: the powerful chain-of-thought prompt leads to word-by-word translation behavior, thus bringing significant translation degradation.",
}
|
ChatGPT shows remarkable capabilities for machine translation (MT). Several prior studies have shown that it achieves comparable results to commercial systems for high-resource languages, but lags behind in complex tasks, e.g., low-resource and distant-language-pair translation. However, they usually adopt simple prompts which cannot fully elicit the capability of ChatGPT. In this report, we aim to further mine ChatGPT{'}s translation ability by revisiting several aspects: temperature, task information, and domain information, and correspondingly propose two (simple but effective) prompts: Task-Specific Prompts (TSP) and Domain-Specific Prompts (DSP). We show that: 1) The performance of ChatGPT depends largely on temperature, and a lower temperature can usually achieve better performance; 2) Emphasizing the task information further improves ChatGPT{'}s performance, particularly in complex MT tasks; 3) Introducing domain information can elicit ChatGPT{'}s generalization ability and improve its performance in the specific domain; 4) ChatGPT tends to generate hallucinations for non-English-centric MT tasks, which can be partially addressed by our proposed prompts but still need to be highlighted for the MT/NLP community. We also explore the effects of advanced in-context learning strategies and find a (negative but interesting) observation: the powerful chain-of-thought prompt leads to word-by-word translation behavior, thus bringing significant translation degradation.
|
[
"Peng, Keqin",
"Ding, Liang",
"Zhong, Qihuang",
"Shen, Li",
"Liu, Xuebo",
"Zhang, Min",
"Ouyang, Yuanxin",
"Tao, Dacheng"
] |
Towards Making the Most of ChatGPT for Machine Translation
|
findings-emnlp.373
|
2303.13780
|
[
"https://github.com/romainpkq/chatgpt4mt"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
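The Task-Specific Prompts (TSP) and Domain-Specific Prompts (DSP) described above amount to stating the task, and optionally the domain, explicitly before the source sentence. The sketch below shows one plausible way to build such prompts; the exact wording is assumed rather than copied from the paper, and per finding 1) the resulting prompt should be paired with a low sampling temperature.

```python
# Illustrative prompt builders in the spirit of the paper's TSP and DSP.
# The phrasing is an assumption, not the authors' exact templates.
def task_specific_prompt(src_lang, tgt_lang, text):
    return (f"You are a machine translation system. "
            f"Translate the following {src_lang} sentence into {tgt_lang}: {text}")

def domain_specific_prompt(src_lang, tgt_lang, domain, text):
    return (f"You are a machine translation system for the {domain} domain. "
            f"Translate the following {src_lang} {domain} sentence "
            f"into {tgt_lang}: {text}")

print(task_specific_prompt("German", "English", "Guten Morgen!"))
print(domain_specific_prompt("German", "English", "biomedical",
                             "Die Dosis wurde angepasst."))
```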
https://aclanthology.org/2023.findings-emnlp.374.bib
|
https://aclanthology.org/2023.findings-emnlp.374/
|
@inproceedings{lu-etal-2023-enhancing,
title = "Enhancing Reasoning Capabilities by Instruction Learning and Chain-of-Thoughts for Implicit Discourse Relation Recognition",
author = "Lu, Yuxiang and
Hong, Yu and
Wang, Zhipang and
Zhou, Guodong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.374",
doi = "10.18653/v1/2023.findings-emnlp.374",
pages = "5634--5640",
abstract = "The aim of implicit discourse relation recognition is to comprehend the sense of connection between two arguments. In this work, we present a classification method that is solely based on generative models. Our proposed approach employs a combination of instruction templates and in-context learning to refine the generative model for effectively addressing the implicit discourse relation recognition task. Furthermore, we utilize Chain-of-Thoughts to partition the inference process into a sequence of three successive stages. This strategy enables us to fully utilize the autoregressive generative model{'}s potential for knowledge acquisition and inference, ultimately leading to enhanced performance on this natural language understanding task. The results of our experiments, evaluated on benchmark datasets PDTB 2.0, PDTB 3.0, and the CoNLL16 shared task, demonstrate superior performance compared to previous state-of-the-art models.",
}
|
The aim of implicit discourse relation recognition is to comprehend the sense of connection between two arguments. In this work, we present a classification method that is solely based on generative models. Our proposed approach employs a combination of instruction templates and in-context learning to refine the generative model for effectively addressing the implicit discourse relation recognition task. Furthermore, we utilize Chain-of-Thoughts to partition the inference process into a sequence of three successive stages. This strategy enables us to fully utilize the autoregressive generative model{'}s potential for knowledge acquisition and inference, ultimately leading to enhanced performance on this natural language understanding task. The results of our experiments, evaluated on benchmark datasets PDTB 2.0, PDTB 3.0, and the CoNLL16 shared task, demonstrate superior performance compared to previous state-of-the-art models.
|
[
"Lu, Yuxiang",
"Hong, Yu",
"Wang, Zhipang",
"Zhou, Guodong"
] |
Enhancing Reasoning Capabilities by Instruction Learning and Chain-of-Thoughts for Implicit Discourse Relation Recognition
|
findings-emnlp.374
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.375.bib
|
https://aclanthology.org/2023.findings-emnlp.375/
|
@inproceedings{jiang-etal-2023-large,
title = "Large-Scale and Multi-Perspective Opinion Summarization with Diverse Review Subsets",
author = "Jiang, Han and
Wang, Rui and
Wei, Zhihua and
Li, Yu and
Wang, Xinpeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.375",
doi = "10.18653/v1/2023.findings-emnlp.375",
pages = "5641--5656",
abstract = "Opinion summarization is expected to digest larger review sets and provide summaries from different perspectives. However, most existing solutions are deficient in epitomizing extensive reviews and offering opinion summaries from various angles due to the lack of designs for information selection. To this end, we propose SubSumm, a supervised summarization framework for large-scale multi-perspective opinion summarization. SubSumm consists of a review sampling strategy set and a two-stage training scheme. The sampling strategies take sentiment orientation and contrastive information value into consideration, with which the review subsets from different perspectives and quality levels can be selected. Subsequently, the summarizer is encouraged to learn from the sub-optimal and optimal subsets successively in order to capitalize on the massive input. Experimental results on AmaSum and Rotten Tomatoes datasets demonstrate that SubSumm is adept at generating pros, cons, and verdict summaries from hundreds of input reviews. Furthermore, our in-depth analysis verifies that the advanced selection of review subsets and the two-stage training scheme are vital to boosting the summarization performance.",
}
|
Opinion summarization is expected to digest larger review sets and provide summaries from different perspectives. However, most existing solutions are deficient in epitomizing extensive reviews and offering opinion summaries from various angles due to the lack of designs for information selection. To this end, we propose SubSumm, a supervised summarization framework for large-scale multi-perspective opinion summarization. SubSumm consists of a review sampling strategy set and a two-stage training scheme. The sampling strategies take sentiment orientation and contrastive information value into consideration, with which the review subsets from different perspectives and quality levels can be selected. Subsequently, the summarizer is encouraged to learn from the sub-optimal and optimal subsets successively in order to capitalize on the massive input. Experimental results on AmaSum and Rotten Tomatoes datasets demonstrate that SubSumm is adept at generating pros, cons, and verdict summaries from hundreds of input reviews. Furthermore, our in-depth analysis verifies that the advanced selection of review subsets and the two-stage training scheme are vital to boosting the summarization performance.
|
[
"Jiang, Han",
"Wang, Rui",
"Wei, Zhihua",
"Li, Yu",
"Wang, Xinpeng"
] |
Large-Scale and Multi-Perspective Opinion Summarization with Diverse Review Subsets
|
findings-emnlp.375
|
2310.13340
|
[
"https://github.com/salomeeeee/subsumm"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.376.bib
|
https://aclanthology.org/2023.findings-emnlp.376/
|
@inproceedings{you-ko-2023-topic,
title = "Topic-Informed Dialogue Summarization using Topic Distribution and Prompt-based Modeling",
author = "You, Jaeah and
Ko, Youngjoong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.376",
doi = "10.18653/v1/2023.findings-emnlp.376",
pages = "5657--5663",
abstract = "Dealing with multiple topics should be considered an important issue in dialogue summarization, because dialogues, unlike documents, are prone to topic drift. Thus, we propose a new dialogue summarization model that reflects dialogue topic distribution to consider all topics present in the dialogue. First, the distribution of dialogue topics is estimated by an effective topic discovery model. Then topic-informed prompt transfers estimated topic distribution information to the output of encoder and decoder vectors. Finally, the topic extractor estimates the summary topic distribution from the output context vector of decoder to distinguish its difference from the dialogue topic distribution. To consider the proportion of each topic distribution appeared in the dialogue, the extractor is trained to reduce the difference between the distributions of the dialogue and the summary. The experimental results on SAMSum and DialogSum show that our model outperforms state-of-the-art methods on ROUGE scores. The human evaluation results also show that our framework well generates comprehensive summaries.",
}
|
Dealing with multiple topics should be considered an important issue in dialogue summarization, because dialogues, unlike documents, are prone to topic drift. Thus, we propose a new dialogue summarization model that reflects the dialogue topic distribution to consider all topics present in the dialogue. First, the distribution of dialogue topics is estimated by an effective topic discovery model. Then a topic-informed prompt transfers the estimated topic distribution information to the output vectors of the encoder and decoder. Finally, the topic extractor estimates the summary topic distribution from the output context vector of the decoder to distinguish its difference from the dialogue topic distribution. To consider the proportion of each topic that appears in the dialogue, the extractor is trained to reduce the difference between the distributions of the dialogue and the summary. The experimental results on SAMSum and DialogSum show that our model outperforms state-of-the-art methods on ROUGE scores. The human evaluation results also show that our framework generates comprehensive summaries well.
|
[
"You, Jaeah",
"Ko, Youngjoong"
] |
Topic-Informed Dialogue Summarization using Topic Distribution and Prompt-based Modeling
|
findings-emnlp.376
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
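The training signal described above pushes the predicted summary topic distribution toward the dialogue topic distribution. One way to implement such an objective is sketched below, under the assumption that the "difference" is measured with KL divergence; the paper may use a different divergence, and the tensor shapes and names are illustrative.

```python
# Sketch of a topic-alignment loss: penalize divergence between the
# extractor's predicted summary topics and the dialogue topics.
# KL divergence is an assumed choice here, not confirmed by the paper.
import torch
import torch.nn.functional as F

def topic_alignment_loss(summary_topic_logits, dialogue_topic_dist):
    # summary_topic_logits: (batch, n_topics) unnormalized scores predicted
    # from the decoder's context vector by the topic extractor.
    # dialogue_topic_dist: (batch, n_topics) probabilities from the topic
    # discovery model; each row sums to 1.
    log_p_summary = F.log_softmax(summary_topic_logits, dim=-1)
    return F.kl_div(log_p_summary, dialogue_topic_dist, reduction="batchmean")

# Toy usage with random tensors.
loss = topic_alignment_loss(torch.randn(2, 4),
                            torch.softmax(torch.randn(2, 4), dim=-1))
print(loss.item())
```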
https://aclanthology.org/2023.findings-emnlp.377.bib
|
https://aclanthology.org/2023.findings-emnlp.377/
|
@inproceedings{hong-etal-2023-disentangling,
title = "Disentangling Structure and Style: Political Bias Detection in News by Inducing Document Hierarchy",
author = "Hong, Jiwoo and
Cho, Yejin and
Han, Jiyoung and
Jung, Jaemin and
Thorne, James",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.377",
doi = "10.18653/v1/2023.findings-emnlp.377",
pages = "5664--5686",
abstract = "We address an important gap in detecting political bias in news articles. Previous works that perform document classification can be influenced by the writing style of each news outlet, leading to overfitting and limited generalizability. Our approach overcomes this limitation by considering both the sentence-level semantics and the document-level rhetorical structure, resulting in a more robust and style-agnostic approach to detecting political bias in news articles. We introduce a novel multi-head hierarchical attention model that effectively encodes the structure of long documents through a diverse ensemble of attention heads. While journalism follows a formalized rhetorical structure, the writing style may vary by news outlet. We demonstrate that our method overcomes this domain dependency and outperforms previous approaches for robustness and accuracy. Further analysis and human evaluation demonstrate the ability of our model to capture common discourse structures in journalism.",
}
|
We address an important gap in detecting political bias in news articles. Previous works that perform document classification can be influenced by the writing style of each news outlet, leading to overfitting and limited generalizability. Our approach overcomes this limitation by considering both the sentence-level semantics and the document-level rhetorical structure, resulting in a more robust and style-agnostic approach to detecting political bias in news articles. We introduce a novel multi-head hierarchical attention model that effectively encodes the structure of long documents through a diverse ensemble of attention heads. While journalism follows a formalized rhetorical structure, the writing style may vary by news outlet. We demonstrate that our method overcomes this domain dependency and outperforms previous approaches for robustness and accuracy. Further analysis and human evaluation demonstrate the ability of our model to capture common discourse structures in journalism.
|
[
"Hong, Jiwoo",
"Cho, Yejin",
"Han, Jiyoung",
"Jung, Jaemin",
"Thorne, James"
] |
Disentangling Structure and Style: Political Bias Detection in News by Inducing Document Hierarchy
|
findings-emnlp.377
|
2304.02247
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.378.bib
|
https://aclanthology.org/2023.findings-emnlp.378/
|
@inproceedings{press-etal-2023-measuring,
title = "Measuring and Narrowing the Compositionality Gap in Language Models",
author = "Press, Ofir and
Zhang, Muru and
Min, Sewon and
Schmidt, Ludwig and
Smith, Noah and
Lewis, Mike",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.378",
doi = "10.18653/v1/2023.findings-emnlp.378",
pages = "5687--5711",
abstract = "We investigate the ability of language models to perform compositional reasoning tasks where the overall solution depends on correctly composing the answers to sub-problems. We measure how often models can correctly answer all sub-problems but not generate the overall solution, a ratio we call the compositionality gap. We evaluate this ratio by asking multi-hop questions with answers that require composing multiple facts unlikely to have been observed together during pretraining. In the GPT-3 family of models, as model size increases we show that the single-hop question answering performance improves faster than the multi-hop performance does, therefore the compositionality gap does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, they show no corresponding improvement in their ability to perform this kind of compositional reasoning. We then demonstrate how elicitive prompting (such as chain of thought) narrows the compositionality gap by reasoning explicitly instead of implicitly. We present a new method, self-ask, that further improves on chain of thought. In our method, the model explicitly asks itself (and then answers) follow-up questions before answering the initial question. We finally show that self-ask{'}s structured prompting lets us easily plug in a search engine to answer the follow-up questions, which additionally improves accuracy.",
}
|
We investigate the ability of language models to perform compositional reasoning tasks where the overall solution depends on correctly composing the answers to sub-problems. We measure how often models can correctly answer all sub-problems but not generate the overall solution, a ratio we call the compositionality gap. We evaluate this ratio by asking multi-hop questions with answers that require composing multiple facts unlikely to have been observed together during pretraining. In the GPT-3 family of models, we show that as model size increases, single-hop question answering performance improves faster than multi-hop performance does; therefore, the compositionality gap does not decrease. This surprising result suggests that while more powerful models memorize and recall more factual knowledge, they show no corresponding improvement in their ability to perform this kind of compositional reasoning. We then demonstrate how elicitive prompting (such as chain of thought) narrows the compositionality gap by reasoning explicitly instead of implicitly. We present a new method, self-ask, that further improves on chain of thought. In our method, the model explicitly asks itself (and then answers) follow-up questions before answering the initial question. We finally show that self-ask{'}s structured prompting lets us easily plug in a search engine to answer the follow-up questions, which additionally improves accuracy.
|
[
"Press, Ofir",
"Zhang, Muru",
"Min, Sewon",
"Schmidt, Ludwig",
"Smith, Noah",
"Lewis, Mike"
] |
Measuring and Narrowing the Compositionality Gap in Language Models
|
findings-emnlp.378
|
2210.03350
|
[
"https://github.com/ofirpress/self-ask"
] |
https://huggingface.co/papers/2210.03350
| 0 | 0 | 0 | 6 |
[] |
[
"dylanalloy/ehc-contrived-financial",
"chiayewken/bamboogle",
"csujeong/financial_company_revenue"
] |
[] | 1 |
Poster
|
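Self-ask is a purely textual scaffold: a few-shot exemplar demonstrates posing and answering follow-up questions before the final answer, and the model continues the pattern for a new question. A minimal sketch follows; the exemplar is paraphrased for illustration, and the exact prompts live in the linked repository.

```python
# Sketch of the self-ask prompt scaffold described in the abstract.
# The exemplar below is paraphrased, not the authors' verbatim prompt.
SELF_ASK_EXEMPLAR = """\
Question: Who lived longer, Theodor Haecker or Harry Vaughan Watkins?
Are follow up questions needed here: Yes.
Follow up: How old was Theodor Haecker when he died?
Intermediate answer: Theodor Haecker was 65 years old when he died.
Follow up: How old was Harry Vaughan Watkins when he died?
Intermediate answer: Harry Vaughan Watkins was 69 years old when he died.
So the final answer is: Harry Vaughan Watkins.
"""

def self_ask_prompt(question: str) -> str:
    # Prepend the exemplar so the model imitates the ask-then-answer pattern.
    return (f"{SELF_ASK_EXEMPLAR}\n"
            f"Question: {question}\n"
            f"Are follow up questions needed here:")

print(self_ask_prompt("Who was president when the Eiffel Tower was built?"))
```

Because intermediate answers appear on clearly delimited lines, a search engine can be slotted in to answer each "Follow up:" question, which is the plug-in behavior the abstract mentions.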
https://aclanthology.org/2023.findings-emnlp.379.bib
|
https://aclanthology.org/2023.findings-emnlp.379/
|
@inproceedings{wang-etal-2023-unsupervised,
title = "Unsupervised Candidate Answer Extraction through Differentiable Masker-Reconstructor Model",
author = "Wang, Zhuoer and
Wang, Yicheng and
Zhu, Ziwei and
Caverlee, James",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.379",
doi = "10.18653/v1/2023.findings-emnlp.379",
pages = "5712--5723",
abstract = "Question generation is a widely used data augmentation approach with extensive applications, and extracting qualified candidate answers from context passages is a critical step for most question generation systems. However, existing methods for candidate answer extraction are reliant on linguistic rules or annotated data that face the partial annotation issue and challenges in generalization. To overcome these limitations, we propose a novel unsupervised candidate answer extraction approach that leverages the inherent structure of context passages through a Differentiable Masker-Reconstructor (DMR) Model with the enforcement of self-consistency for picking up salient information tokens. We curated two datasets with exhaustively-annotated answers and benchmark a comprehensive set of supervised and unsupervised candidate answer extraction methods. We demonstrate the effectiveness of the DMR model by showing its performance is superior among unsupervised methods and comparable to supervised methods.",
}
|
Question generation is a widely used data augmentation approach with extensive applications, and extracting qualified candidate answers from context passages is a critical step for most question generation systems. However, existing methods for candidate answer extraction are reliant on linguistic rules or annotated data that face the partial annotation issue and challenges in generalization. To overcome these limitations, we propose a novel unsupervised candidate answer extraction approach that leverages the inherent structure of context passages through a Differentiable Masker-Reconstructor (DMR) Model with the enforcement of self-consistency for picking up salient information tokens. We curate two datasets with exhaustively annotated answers and benchmark a comprehensive set of supervised and unsupervised candidate answer extraction methods. We demonstrate the effectiveness of the DMR model by showing its performance is superior among unsupervised methods and comparable to supervised methods.
|
[
"Wang, Zhuoer",
"Wang, Yicheng",
"Zhu, Ziwei",
"Caverlee, James"
] |
Unsupervised Candidate Answer Extraction through Differentiable Masker-Reconstructor Model
|
findings-emnlp.379
|
2310.13106
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.380.bib
|
https://aclanthology.org/2023.findings-emnlp.380/
|
@inproceedings{song-etal-2023-honeybee,
title = "{H}oney{B}ee: Progressive Instruction Finetuning of Large Language Models for Materials Science",
author = "Song, Yu and
Miret, Santiago and
Zhang, Huan and
Liu, Bang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.380",
doi = "10.18653/v1/2023.findings-emnlp.380",
pages = "5724--5739",
abstract = "We propose an instruction-based process for trustworthy data curation in materials science (MatSci-Instruct), which we then apply to finetune a LLaMa-based language model targeted for materials science (HoneyBee). MatSci-Instruct helps alleviate the scarcity of relevant, high-quality materials science textual data available in the open literature, and HoneyBee is the first billion-parameter language model specialized to materials science. In MatSci-Instruct we improve the trustworthiness of generated data by prompting multiple commercially available large language models for generation with an Instructor module (e.g. Chat-GPT) and verification from an independent Verifier module (e.g. Claude). Using MatSci-Instruct, we construct a dataset of multiple tasks and measure the quality of our dataset along multiple dimensions, including accuracy against known facts, relevance to materials science, as well as completeness and reasonableness of the data. Moreover, we iteratively generate more targeted instructions and instruction-data in a finetuning-evaluation-feedback loop leading to progressively better performance for our finetuned HoneyBee models. Our evaluation on the MatSci-NLP benchmark shows HoneyBee{'}s outperformance of existing language models on materials science tasks and iterative improvement in successive stages of instruction-data refinement. We study the quality of HoneyBee{'}s language modeling through automatic evaluation and analyze case studies to further understand the model{'}s capabilities and limitations. Our code and relevant datasets are publicly available at https://github.com/BangLab-UdeM-Mila/NLP4MatSci-HoneyBee.",
}
|
We propose an instruction-based process for trustworthy data curation in materials science (MatSci-Instruct), which we then apply to finetune a LLaMa-based language model targeted for materials science (HoneyBee). MatSci-Instruct helps alleviate the scarcity of relevant, high-quality materials science textual data available in the open literature, and HoneyBee is the first billion-parameter language model specialized to materials science. In MatSci-Instruct we improve the trustworthiness of generated data by prompting multiple commercially available large language models for generation with an Instructor module (e.g. Chat-GPT) and verification from an independent Verifier module (e.g. Claude). Using MatSci-Instruct, we construct a dataset of multiple tasks and measure the quality of our dataset along multiple dimensions, including accuracy against known facts, relevance to materials science, as well as completeness and reasonableness of the data. Moreover, we iteratively generate more targeted instructions and instruction-data in a finetuning-evaluation-feedback loop leading to progressively better performance for our finetuned HoneyBee models. Our evaluation on the MatSci-NLP benchmark shows HoneyBee{'}s outperformance of existing language models on materials science tasks and iterative improvement in successive stages of instruction-data refinement. We study the quality of HoneyBee{'}s language modeling through automatic evaluation and analyze case studies to further understand the model{'}s capabilities and limitations. Our code and relevant datasets are publicly available at https://github.com/BangLab-UdeM-Mila/NLP4MatSci-HoneyBee.
|
[
"Song, Yu",
"Miret, Santiago",
"Zhang, Huan",
"Liu, Bang"
] |
HoneyBee: Progressive Instruction Finetuning of Large Language Models for Materials Science
|
findings-emnlp.380
|
2310.08511
|
[
"https://github.com/BangLab-UdeM-Mila/NLP4MatSci-HoneyBee"
] |
https://huggingface.co/papers/2310.08511
| 1 | 0 | 0 | 4 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.381.bib
|
https://aclanthology.org/2023.findings-emnlp.381/
|
@inproceedings{luo-etal-2023-prompt,
title = "Prompt-Based Editing for Text Style Transfer",
author = "Luo, Guoqing and
Han, Yu and
Mou, Lili and
Firdaus, Mauajama",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.381",
doi = "10.18653/v1/2023.findings-emnlp.381",
pages = "5740--5750",
abstract = "Prompting approaches have been recently explored in text style transfer, where a textual prompt is used to query a pretrained language model (PLM) to generate style-transferred texts word by word in an autoregressive manner. However, such a generation process is less controllable and early prediction errors may affect future word predictions. In this paper, we propose a prompt-based editing approach to text style transfer. Specifically, we prompt a PLM for style classification and use the classification probability to compute a style score. Then, we perform discrete search with word-level editing to maximize a comprehensive scoring function for the style-transfer task. In this way, we transform a prompt-based generation problem into a classification one, which does not suffer from the error accumulation problem and is more controllable than the autoregressive generation of sentences. In our experiments, we performed both automatic and human evaluation on three style-transfer benchmark datasets, and show that our approach largely outperforms the existing systems that have 20 times more parameters. Additional empirical analyses further demonstrate the effectiveness of our approach.",
}
|
Prompting approaches have been recently explored in text style transfer, where a textual prompt is used to query a pretrained language model (PLM) to generate style-transferred texts word by word in an autoregressive manner. However, such a generation process is less controllable and early prediction errors may affect future word predictions. In this paper, we propose a prompt-based editing approach to text style transfer. Specifically, we prompt a PLM for style classification and use the classification probability to compute a style score. Then, we perform discrete search with word-level editing to maximize a comprehensive scoring function for the style-transfer task. In this way, we transform a prompt-based generation problem into a classification one, which does not suffer from the error accumulation problem and is more controllable than the autoregressive generation of sentences. In our experiments, we performed both automatic and human evaluation on three style-transfer benchmark datasets, and show that our approach largely outperforms the existing systems that have 20 times more parameters. Additional empirical analyses further demonstrate the effectiveness of our approach.
|
[
"Luo, Guoqing",
"Han, Yu",
"Mou, Lili",
"Firdaus, Mauajama"
] |
Prompt-Based Editing for Text Style Transfer
|
findings-emnlp.381
|
2301.11997
|
[
"https://github.com/manga-uofa/prompt-edit"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
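The discrete search described above can be as simple as greedy hill-climbing over word-level substitutions scored by a style classifier. The sketch below uses a toy scorer as a stand-in for the PLM-prompted style probability and omits the fluency and semantic-similarity terms a full scoring function would include.

```python
# Minimal sketch of discrete search with word-level edits that greedily
# maximizes a scoring function, in the spirit of prompt-based editing.
# `style_score` is caller-supplied; in the paper it comes from prompting
# a PLM for style classification.
import random

def hill_climb(sentence, candidate_words, style_score, steps=100, seed=0):
    rng = random.Random(seed)
    words = sentence.split()
    best_score = style_score(" ".join(words))
    for _ in range(steps):
        i = rng.randrange(len(words))              # pick a position to edit
        proposal = words.copy()
        proposal[i] = rng.choice(candidate_words)  # word-level substitution
        score = style_score(" ".join(proposal))
        if score > best_score:                     # keep only improving edits
            words, best_score = proposal, score
    return " ".join(words), best_score

# Toy scorer: rewards occurrences of "positive" words. A real scoring
# function would also penalize disfluency and meaning drift.
toy_score = lambda s: sum(w in {"great", "wonderful"} for w in s.split())
print(hill_climb("the food was bad", ["great", "wonderful", "bland"], toy_score))
```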
https://aclanthology.org/2023.findings-emnlp.382.bib
|
https://aclanthology.org/2023.findings-emnlp.382/
|
@inproceedings{dogruoz-etal-2023-representativeness,
title = "Representativeness as a Forgotten Lesson for Multilingual and Code-switched Data Collection and Preparation",
author = {Do{\u{g}}ru{\"o}z, A. Seza and
Sitaram, Sunayana and
Yong, Zheng Xin},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.382",
doi = "10.18653/v1/2023.findings-emnlp.382",
pages = "5751--5767",
abstract = "Multilingualism is widespread around the world and code-switching (CSW) is a common practice among different language pairs/tuples across locations and regions. However, there is still not much progress in building successful CSW systems, despite the recent advances in Massive Multilingual Language Models (MMLMs). We investigate the reasons behind this setback through a critical study about the existing CSW data sets (68) across language pairs in terms of the collection and preparation (e.g. transcription and annotation) stages. This in-depth analysis reveals that \textbf{a)} most CSW data involves English ignoring other language pairs/tuples \textbf{b)} there are flaws in terms of representativeness in data collection and preparation stages due to ignoring the location based, socio-demographic and register variation in CSW. In addition, lack of clarity on the data selection and filtering stages shadow the representativeness of CSW data sets. We conclude by providing a short check-list to improve the representativeness for forthcoming studies involving CSW data collection and preparation.",
}
|
Multilingualism is widespread around the world and code-switching (CSW) is a common practice among different language pairs/tuples across locations and regions. However, there is still not much progress in building successful CSW systems, despite the recent advances in Massive Multilingual Language Models (MMLMs). We investigate the reasons behind this setback through a critical study of the 68 existing CSW data sets across language pairs in terms of the collection and preparation (e.g. transcription and annotation) stages. This in-depth analysis reveals that \textbf{a)} most CSW data involves English, ignoring other language pairs/tuples, and \textbf{b)} there are flaws in terms of representativeness in the data collection and preparation stages due to ignoring the location-based, socio-demographic and register variation in CSW. In addition, a lack of clarity about the data selection and filtering stages obscures the representativeness of CSW data sets. We conclude by providing a short check-list to improve the representativeness in forthcoming studies involving CSW data collection and preparation.
|
[
"Do{\\u{g}}ru{\\\"o}z, A. Seza",
"Sitaram, Sunayana",
"Yong, Zheng Xin"
] |
Representativeness as a Forgotten Lesson for Multilingual and Code-switched Data Collection and Preparation
|
findings-emnlp.382
|
2310.20470
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.383.bib
|
https://aclanthology.org/2023.findings-emnlp.383/
|
@inproceedings{khan-etal-2023-nervous,
title = "{NER}vous About My Health: Constructing a {B}engali Medical Named Entity Recognition Dataset",
author = "Khan, Alvi and
Kamal, Fida and
Nower, Nuzhat and
Ahmed, Tasnim and
Ahmed, Sabbir and
Chowdhury, Tareque",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.383",
doi = "10.18653/v1/2023.findings-emnlp.383",
pages = "5768--5774",
abstract = "The ability to identify important entities in a text, known as Named Entity Recognition (NER), is useful in a large variety of downstream tasks in the biomedical domain. This is a considerably difficult task when working with Consumer Health Questions (CHQs), which consist of informal language used in day-to-day life by patients. These difficulties are amplified in the case of Bengali, which allows for a huge amount of flexibility in sentence structures and has significant variances in regional dialects. Unfortunately, the complexity of the language is not accurately reflected in the limited amount of available data, which makes it difficult to build a reliable decision-making system. To address the scarcity of data, this paper presents {`}Bangla-HealthNER{'}, a comprehensive dataset designed to identify named entities in health-related texts in the Bengali language. It consists of 31,783 samples sourced from a popular online public health platform, which allows it to capture the diverse range of linguistic styles and dialects used by native speakers from various regions in their day-to-day lives. The insight into this diversity in language will prove useful to any medical decision-making systems that are developed for use in real-world applications. To highlight the difficulty of the dataset, it has been benchmarked on state-of-the-art token classification models, where BanglishBERT achieved the highest performance with an F1-score of $56.13 \pm 0.75${\%}. The dataset and all relevant code used in this work have been made publicly available.",
}
|
The ability to identify important entities in a text, known as Named Entity Recognition (NER), is useful in a large variety of downstream tasks in the biomedical domain. This is a considerably difficult task when working with Consumer Health Questions (CHQs), which consist of informal language used in day-to-day life by patients. These difficulties are amplified in the case of Bengali, which allows for a huge amount of flexibility in sentence structures and has significant variances in regional dialects. Unfortunately, the complexity of the language is not accurately reflected in the limited amount of available data, which makes it difficult to build a reliable decision-making system. To address the scarcity of data, this paper presents {`}Bangla-HealthNER{'}, a comprehensive dataset designed to identify named entities in health-related texts in the Bengali language. It consists of 31,783 samples sourced from a popular online public health platform, which allows it to capture the diverse range of linguistic styles and dialects used by native speakers from various regions in their day-to-day lives. The insight into this diversity in language will prove useful to any medical decision-making systems that are developed for use in real-world applications. To highlight the difficulty of the dataset, it has been benchmarked on state-of-the-art token classification models, where BanglishBERT achieved the highest performance with an F1-score of $56.13 \pm 0.75${\%}. The dataset and all relevant code used in this work have been made publicly available.
|
[
"Khan, Alvi",
"Kamal, Fida",
"Nower, Nuzhat",
"Ahmed, Tasnim",
"Ahmed, Sabbir",
"Chowdhury, Tareque"
] |
NERvous About My Health: Constructing a Bengali Medical Named Entity Recognition Dataset
|
findings-emnlp.383
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.384.bib
|
https://aclanthology.org/2023.findings-emnlp.384/
|
@inproceedings{yu-etal-2023-sparse,
title = "Sparse Black-Box Multimodal Attack for Vision-Language Adversary Generation",
author = "Yu, Zhen and
Qin, Zhou and
Chen, Zhenhua and
Lian, Meihui and
Fu, Haojun and
Wen, Weigao and
Xue, Hui and
He, Kun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.384",
doi = "10.18653/v1/2023.findings-emnlp.384",
pages = "5775--5784",
abstract = "Deep neural networks have been widely applied in real-world scenarios, such as product restrictions on e-commerce and hate speech monitoring on social media, to ensure secure governance of various platforms. However, illegal merchants often deceive the detection models by adding large-scale perturbations to prohibited products, so as to earn illegal profits. Current adversarial attacks using imperceptible perturbations encounter challenges in simulating such adversarial behavior and evaluating the vulnerabilities of detection models to such perturbations. To address this issue, we propose a novel black-box multimodal attack, termed Sparse Multimodal Attack (SparseMA), which leverages sparse perturbations to simulate the adversarial behavior exhibited by illegal merchants in the black-box scenario. Moreover, SparseMA bridges the gap between images and texts by treating the separated image patches and text words uniformly in the discrete space. Extensive experiments demonstrate that SparseMA can identify the vulnerability of the model to different modalities, outperforming existing multimodal attacks and unimodal attacks. SparseMA, which is the first proposed method for black-box multimodal attacks to our knowledge, would be used as an effective tool for evaluating the robustness of multimodal models to different modalities.",
}
|
Deep neural networks have been widely applied in real-world scenarios, such as product restrictions on e-commerce and hate speech monitoring on social media, to ensure secure governance of various platforms. However, illegal merchants often deceive the detection models by adding large-scale perturbations to prohibited products, so as to earn illegal profits. Current adversarial attacks using imperceptible perturbations encounter challenges in simulating such adversarial behavior and evaluating the vulnerabilities of detection models to such perturbations. To address this issue, we propose a novel black-box multimodal attack, termed Sparse Multimodal Attack (SparseMA), which leverages sparse perturbations to simulate the adversarial behavior exhibited by illegal merchants in the black-box scenario. Moreover, SparseMA bridges the gap between images and texts by treating the separated image patches and text words uniformly in the discrete space. Extensive experiments demonstrate that SparseMA can identify the vulnerability of the model to different modalities, outperforming existing multimodal attacks and unimodal attacks. SparseMA, which to our knowledge is the first method proposed for black-box multimodal attacks, can serve as an effective tool for evaluating the robustness of multimodal models to different modalities.
|
[
"Yu, Zhen",
"Qin, Zhou",
"Chen, Zhenhua",
"Lian, Meihui",
"Fu, Haojun",
"Wen, Weigao",
"Xue, Hui",
"He, Kun"
] |
Sparse Black-Box Multimodal Attack for Vision-Language Adversary Generation
|
findings-emnlp.384
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.385.bib
|
https://aclanthology.org/2023.findings-emnlp.385/
|
@inproceedings{shi-etal-2023-towards,
title = "Towards a Unified Framework for Reference Retrieval and Related Work Generation",
author = "Shi, Zhengliang and
Gao, Shen and
Zhang, Zhen and
Chen, Xiuying and
Chen, Zhumin and
Ren, Pengjie and
Ren, Zhaochun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.385",
doi = "10.18653/v1/2023.findings-emnlp.385",
pages = "5785--5799",
abstract = "The task of related work generation aims to generate a comprehensive survey of related research topics automatically, saving time and effort for authors. Existing methods simplify this task by using human-annotated references in a large-scale scientific corpus as information sources, which is time- and cost-intensive. To this end, we propose a Unified Reference Retrieval and Related Work Generation Model (UR3WG), which combines reference retrieval and related work generation processes in a unified framework based on the large language model (LLM). Specifically, UR3WG first leverages the world knowledge of LLM to extend the abstract and generate the query for the subsequent retrieval stage. Then a lexicon-enhanced dense retrieval is proposed to search relevant references, where an importance-aware representation of the lexicon is introduced. We also propose multi-granularity contrastive learning to optimize our retriever. Since this task is not simply summarizing the main points in references, it should analyze the complex relationships and present them logically. We propose an instruction-tuning method to leverage LLM to generate related work. Extensive experiments on two wide-applied datasets demonstrate that our model outperforms the state-of-the-art baselines in both generation and retrieval metrics.",
}
|
The task of related work generation aims to generate a comprehensive survey of related research topics automatically, saving time and effort for authors. Existing methods simplify this task by using human-annotated references in a large-scale scientific corpus as information sources, which is time- and cost-intensive. To this end, we propose a Unified Reference Retrieval and Related Work Generation Model (UR3WG), which combines reference retrieval and related work generation processes in a unified framework based on the large language model (LLM). Specifically, UR3WG first leverages the world knowledge of the LLM to extend the abstract and generate the query for the subsequent retrieval stage. Then a lexicon-enhanced dense retrieval is proposed to search for relevant references, where an importance-aware representation of the lexicon is introduced. We also propose multi-granularity contrastive learning to optimize our retriever. Since this task is not simply summarizing the main points in references, it should analyze the complex relationships between them and present them logically. We propose an instruction-tuning method to leverage the LLM to generate related work. Extensive experiments on two widely used datasets demonstrate that our model outperforms the state-of-the-art baselines in both generation and retrieval metrics.
|
[
"Shi, Zhengliang",
"Gao, Shen",
"Zhang, Zhen",
"Chen, Xiuying",
"Chen, Zhumin",
"Ren, Pengjie",
"Ren, Zhaochun"
] |
Towards a Unified Framework for Reference Retrieval and Related Work Generation
|
findings-emnlp.385
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.386.bib
|
https://aclanthology.org/2023.findings-emnlp.386/
|
@inproceedings{liu-etal-2023-visual-storytelling,
title = "Visual Storytelling with Question-Answer Plans",
author = "Liu, Danyang and
Lapata, Mirella and
Keller, Frank",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.386",
doi = "10.18653/v1/2023.findings-emnlp.386",
pages = "5800--5813",
abstract = "Visual storytelling aims to generate compelling narratives from image sequences. Existing models often focus on enhancing the representation of the image sequence, e.g., with external knowledge sources or advanced graph structures. Despite recent progress, the stories are often repetitive, illogical, and lacking in detail. To mitigate these issues, we present a novel framework which integrates visual representations with pretrained language models and planning. Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret. It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative. Automatic and human evaluation on the VIST benchmark demonstrates that blueprint-based models generate stories that are more coherent, interesting, and natural compared to competitive baselines and state-of-the-art systems.",
}
|
Visual storytelling aims to generate compelling narratives from image sequences. Existing models often focus on enhancing the representation of the image sequence, e.g., with external knowledge sources or advanced graph structures. Despite recent progress, the stories are often repetitive, illogical, and lacking in detail. To mitigate these issues, we present a novel framework which integrates visual representations with pretrained language models and planning. Our model translates the image sequence into a visual prefix, a sequence of continuous embeddings which language models can interpret. It also leverages a sequence of question-answer pairs as a blueprint plan for selecting salient visual concepts and determining how they should be assembled into a narrative. Automatic and human evaluation on the VIST benchmark demonstrates that blueprint-based models generate stories that are more coherent, interesting, and natural compared to competitive baselines and state-of-the-art systems.
|
[
"Liu, Danyang",
"Lapata, Mirella",
"Keller, Frank"
] |
Visual Storytelling with Question-Answer Plans
|
findings-emnlp.386
|
2310.05295
|
[
""
] |
https://huggingface.co/papers/2310.05295
| 0 | 1 | 0 | 3 |
[] |
[] |
[] | 1 |
Poster
|
https://aclanthology.org/2023.findings-emnlp.387.bib
|
https://aclanthology.org/2023.findings-emnlp.387/
|
@inproceedings{aggarwal-etal-2023-investigating,
title = "Investigating Online Community Engagement through Stancetaking",
author = "Aggarwal, Jai and
Diep, Brian and
Watson, Julia and
Stevenson, Suzanne",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.387",
doi = "10.18653/v1/2023.findings-emnlp.387",
pages = "5814--5830",
abstract = "Much work has explored lexical and semantic variation in online communities, and drawn connections to community identity and user engagement patterns. Communities also express identity through the sociolinguistic concept of stancetaking. Large-scale computational work on stancetaking has explored community similarities in their preferences for stance markers {--} words that serve to indicate aspects of a speaker{'}s stance {--} without considering the stance-relevant properties of the contexts in which stance markers are used. We propose representations of stance contexts for 1798 Reddit communities and show how they capture community identity patterns distinct from textual or marker similarity measures. We also relate our stance context representations to broader inter- and intra-community engagement patterns, including cross-community posting patterns and social network properties of communities. Our findings highlight the strengths of using rich properties of stance as a way of revealing community identity and engagement patterns in online multi-community spaces.",
}
|
Much work has explored lexical and semantic variation in online communities, and drawn connections to community identity and user engagement patterns. Communities also express identity through the sociolinguistic concept of stancetaking. Large-scale computational work on stancetaking has explored community similarities in their preferences for stance markers {--} words that serve to indicate aspects of a speaker{'}s stance {--} without considering the stance-relevant properties of the contexts in which stance markers are used. We propose representations of stance contexts for 1798 Reddit communities and show how they capture community identity patterns distinct from textual or marker similarity measures. We also relate our stance context representations to broader inter- and intra-community engagement patterns, including cross-community posting patterns and social network properties of communities. Our findings highlight the strengths of using rich properties of stance as a way of revealing community identity and engagement patterns in online multi-community spaces.
|
[
"Aggarwal, Jai",
"Diep, Brian",
"Watson, Julia",
"Stevenson, Suzanne"
] |
Investigating Online Community Engagement through Stancetaking
|
findings-emnlp.387
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.388.bib
|
https://aclanthology.org/2023.findings-emnlp.388/
|
@inproceedings{mei-etal-2023-assert,
title = "{ASSERT}: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models",
author = "Mei, Alex and
Levy, Sharon and
Wang, William",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.388",
doi = "10.18653/v1/2023.findings-emnlp.388",
pages = "5831--5847",
abstract = "As large language models are integrated into society, robustness toward a suite of prompts is increasingly important to maintain reliability in a high-variance environment.Robustness evaluations must comprehensively encapsulate the various settings in which a user may invoke an intelligent system. This paper proposes ASSERT, Automated Safety Scenario Red Teaming, consisting of three methods {--} semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection. For robust safety evaluation, we apply these methods in the critical domain of AI safety to algorithmically generate a test suite of prompts covering diverse robustness settings {--} semantic equivalence, related scenarios, and adversarial. We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance. Despite dedicated safeguards in existing state-of-the-art models, we find statistically significant performance differences of up to 11{\%} in absolute classification accuracy among semantically related scenarios and error rates of up to 19{\%} absolute error in zero-shot adversarial settings, raising concerns for users{'} physical safety.",
}
|
As large language models are integrated into society, robustness toward a suite of prompts is increasingly important to maintain reliability in a high-variance environment. Robustness evaluations must comprehensively encapsulate the various settings in which a user may invoke an intelligent system. This paper proposes ASSERT, Automated Safety Scenario Red Teaming, consisting of three methods {--} semantically aligned augmentation, target bootstrapping, and adversarial knowledge injection. For robust safety evaluation, we apply these methods in the critical domain of AI safety to algorithmically generate a test suite of prompts covering diverse robustness settings {--} semantic equivalence, related scenarios, and adversarial. We partition our prompts into four safety domains for a fine-grained analysis of how the domain affects model performance. Despite dedicated safeguards in existing state-of-the-art models, we find statistically significant performance differences of up to 11{\%} in absolute classification accuracy among semantically related scenarios and error rates of up to 19{\%} absolute error in zero-shot adversarial settings, raising concerns for users{'} physical safety.
|
[
"Mei, Alex",
"Levy, Sharon",
"Wang, William"
] |
ASSERT: Automated Safety Scenario Red Teaming for Evaluating the Robustness of Large Language Models
|
findings-emnlp.388
|
2310.09624
|
[
"https://github.com/alexmeigz/assert"
] |
https://huggingface.co/papers/2310.09624
| 0 | 0 | 0 | 3 |
[] |
[] |
[] | 1 |
Poster
|
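To make the "semantically aligned augmentation" idea from ASSERT concrete, here is a minimal Python sketch that generates surface variants of a safety prompt and measures how often a model's judgments disagree across them. The `augment` templates and the `judge` callable are stand-ins, not the paper's released tooling.

```python
# Hypothetical sketch of consistency checking across semantically aligned
# prompt variants. `judge` is any safe/unsafe classifier wrapper.
from typing import Callable, List

def augment(prompt: str) -> List[str]:
    """Produce semantically equivalent rephrasings via simple templates."""
    return [
        prompt,
        f"Someone asks: {prompt}",
        f"Is the following safe to do? {prompt}",
    ]

def robustness_gap(prompts: List[str], judge: Callable[[str], bool]) -> float:
    """Fraction of prompts whose variants receive inconsistent judgments."""
    inconsistent = 0
    for p in prompts:
        labels = {judge(v) for v in augment(p)}
        inconsistent += len(labels) > 1
    return inconsistent / max(len(prompts), 1)

# Toy judge: flags prompts mentioning "bleach" as unsafe.
print(robustness_gap(["mix bleach and ammonia"], lambda s: "bleach" in s))
```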
https://aclanthology.org/2023.findings-emnlp.389.bib
|
https://aclanthology.org/2023.findings-emnlp.389/
|
@inproceedings{tang-etal-2023-learning-correct,
title = "Learning to Correct Noisy Labels for Fine-Grained Entity Typing via Co-Prediction Prompt Tuning",
author = "Tang, Minghao and
He, Yongquan and
Xu, Yongxiu and
Xu, Hongbo and
Zhang, Wenyuan and
Lin, Yang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.389",
doi = "10.18653/v1/2023.findings-emnlp.389",
pages = "5848--5858",
abstract = "Fine-grained entity typing (FET) is an essential task in natural language processing that aims to assign semantic types to entities in text. However, FET poses a major challenge known as the noise labeling problem, whereby current methods rely on estimating noise distribution to identify noisy labels but are confused by diverse noise distribution deviation. To address this limitation, we introduce Co-Prediction Prompt Tuning for noise correction in FET, which leverages multiple prediction results to identify and correct noisy labels. Specifically, we integrate prediction results to recall labeled labels and utilize a differentiated margin to identify inaccurate labels. Moreover, we design an optimization objective concerning divergent co-predictions during fine-tuning, ensuring that the model captures sufficient information and maintains robustness in noise identification. Experimental results on three widely-used FET datasets demonstrate that our noise correction approach significantly enhances the quality of various types of training samples, including those annotated using distant supervision, ChatGPT, and crowdsourcing.",
}
|
Fine-grained entity typing (FET) is an essential task in natural language processing that aims to assign semantic types to entities in text. However, FET poses a major challenge known as the noise labeling problem, whereby current methods rely on estimating noise distribution to identify noisy labels but are confounded by diverse deviations in the noise distribution. To address this limitation, we introduce Co-Prediction Prompt Tuning for noise correction in FET, which leverages multiple prediction results to identify and correct noisy labels. Specifically, we integrate prediction results to recall labeled labels and utilize a differentiated margin to identify inaccurate labels. Moreover, we design an optimization objective concerning divergent co-predictions during fine-tuning, ensuring that the model captures sufficient information and maintains robustness in noise identification. Experimental results on three widely-used FET datasets demonstrate that our noise correction approach significantly enhances the quality of various types of training samples, including those annotated using distant supervision, ChatGPT, and crowdsourcing.
|
[
"Tang, Minghao",
"He, Yongquan",
"Xu, Yongxiu",
"Xu, Hongbo",
"Zhang, Wenyuan",
"Lin, Yang"
] |
Learning to Correct Noisy Labels for Fine-Grained Entity Typing via Co-Prediction Prompt Tuning
|
findings-emnlp.389
|
2310.14596
|
[
"https://github.com/mhtang1995/cppt"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
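The gist of the co-prediction correction above can be sketched in a few lines: two predictors score each candidate type, and a label is kept, recalled, or dropped depending on whether both scores clear a margin. The two-head setup and thresholds below are illustrative, not the paper's exact procedure.

```python
# Minimal sketch, assuming a margin-based keep/recall rule over two
# predictors' type scores; all numbers are placeholders.
def correct_labels(scores_a, scores_b, noisy_labels, margin=0.2):
    corrected = set()
    for t in set(scores_a) | set(scores_b):
        agree = min(scores_a.get(t, 0.0), scores_b.get(t, 0.0))
        if t in noisy_labels and agree > 0.5 - margin:
            corrected.add(t)            # keep a plausible annotated label
        elif t not in noisy_labels and agree > 0.5 + margin:
            corrected.add(t)            # recall a likely missing label
    return corrected

print(correct_labels({"person": 0.9, "artist": 0.7},
                     {"person": 0.8, "artist": 0.2},
                     noisy_labels={"person", "location"}))  # {'person'}
```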
https://aclanthology.org/2023.findings-emnlp.390.bib
|
https://aclanthology.org/2023.findings-emnlp.390/
|
@inproceedings{dong-etal-2023-co2pt,
title = "{C}o$^2${PT}: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning",
author = "Dong, Xiangjue and
Zhu, Ziwei and
Wang, Zhuoer and
Teleki, Maria and
Caverlee, James",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.390",
doi = "10.18653/v1/2023.findings-emnlp.390",
pages = "5859--5871",
abstract = "Pre-trained Language Models are widely used in many important real-world applications. However, recent studies show that these models can encode social biases from large pre-training corpora and even amplify biases in downstream applications. To address this challenge, we propose Co$^2$PT, an efficient and effective *debias-while-prompt tuning* method for mitigating biases via counterfactual contrastive prompt tuning on downstream tasks. Our experiments conducted on three extrinsic bias benchmarks demonstrate the effectiveness of Co$^2$PT on bias mitigation during the prompt tuning process and its adaptability to existing upstream debiased language models. These findings indicate the strength of Co$^2$PT and provide promising avenues for further enhancement in bias mitigation on downstream tasks.",
}
|
Pre-trained Language Models are widely used in many important real-world applications. However, recent studies show that these models can encode social biases from large pre-training corpora and even amplify biases in downstream applications. To address this challenge, we propose Co$^2$PT, an efficient and effective *debias-while-prompt tuning* method for mitigating biases via counterfactual contrastive prompt tuning on downstream tasks. Our experiments conducted on three extrinsic bias benchmarks demonstrate the effectiveness of Co$^2$PT on bias mitigation during the prompt tuning process and its adaptability to existing upstream debiased language models. These findings indicate the strength of Co$^2$PT and provide promising avenues for further enhancement in bias mitigation on downstream tasks.
|
[
"Dong, Xiangjue",
"Zhu, Ziwei",
"Wang, Zhuoer",
"Teleki, Maria",
"Caverlee, James"
] |
Co^2PT: Mitigating Bias in Pre-trained Language Models through Counterfactual Contrastive Prompt Tuning
|
findings-emnlp.390
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
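A short sketch of the counterfactual contrastive idea behind Co^2PT: a sentence and its counterfactual (e.g., with gendered words swapped) should map to nearby representations. The encoder and pairing are placeholders; in actual prompt tuning only the soft prompt parameters would receive gradients.

```python
# Illustrative InfoNCE-style loss pulling each embedding toward its
# counterfactual; not the authors' exact objective.
import torch
import torch.nn.functional as F

def counterfactual_contrastive_loss(h, h_cf, temperature=0.1):
    """h, h_cf: (batch, dim) embeddings of originals and counterfactuals."""
    h = F.normalize(h, dim=-1)
    h_cf = F.normalize(h_cf, dim=-1)
    logits = h @ h_cf.t() / temperature        # (batch, batch) similarities
    targets = torch.arange(h.size(0))          # positive pair on the diagonal
    return F.cross_entropy(logits, targets)

h = torch.randn(4, 16, requires_grad=True)
print(counterfactual_contrastive_loss(h, h + 0.01 * torch.randn(4, 16)).item())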
https://aclanthology.org/2023.findings-emnlp.391.bib
|
https://aclanthology.org/2023.findings-emnlp.391/
|
@inproceedings{shen-etal-2023-hierarchical,
title = "A Hierarchical Encoding-Decoding Scheme for Abstractive Multi-document Summarization",
author = "Shen, Chenhui and
Cheng, Liying and
Nguyen, Xuan-Phi and
You, Yang and
Bing, Lidong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.391",
doi = "10.18653/v1/2023.findings-emnlp.391",
pages = "5872--5887",
abstract = "Pre-trained language models (PLMs) have achieved outstanding achievements in abstractive single-document summarization (SDS). However, such benefits may not fully extend to multi-document summarization (MDS), where the handling of cross-document information is more complex. Previous works either design new MDS architectures or apply PLMs bluntly with concatenated source documents as a reformulated SDS task. While the former does not utilize previous pre-training efforts and may not generalize well across different domains, the latter may not sufficiently attend to the intricate cross-document relationships unique to MDS tasks. Instead, we enforce hierarchy on both the encoder and decoder to better utilize a PLM to facilitate multi-document interactions for the MDS task. Across 10 MDS benchmarks from various domains, our method outperforms or is competitive with the previous best models, including those with additional MDS pre-training or with more parameters. It outperforms its corresponding PLM backbone by up to 3 Rouge-L and is favored by humans.",
}
|
Pre-trained language models (PLMs) have achieved outstanding results in abstractive single-document summarization (SDS). However, such benefits may not fully extend to multi-document summarization (MDS), where the handling of cross-document information is more complex. Previous works either design new MDS architectures or apply PLMs bluntly with concatenated source documents as a reformulated SDS task. While the former does not utilize previous pre-training efforts and may not generalize well across different domains, the latter may not sufficiently attend to the intricate cross-document relationships unique to MDS tasks. Instead, we enforce hierarchy on both the encoder and decoder to better utilize a PLM to facilitate multi-document interactions for the MDS task. Across 10 MDS benchmarks from various domains, our method outperforms or is competitive with the previous best models, including those with additional MDS pre-training or with more parameters. It outperforms its corresponding PLM backbone by up to 3 Rouge-L and is favored by humans.
|
[
"Shen, Chenhui",
"Cheng, Liying",
"Nguyen, Xuan-Phi",
"You, Yang",
"Bing, Lidong"
] |
A Hierarchical Encoding-Decoding Scheme for Abstractive Multi-document Summarization
|
findings-emnlp.391
|
2305.08503
|
[
"https://github.com/damo-nlp-sg/hierencdec"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
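One generic way to impose the kind of encoder hierarchy the abstract describes is to run self-attention within each document first, then let per-document summaries attend across documents. The sketch below is illustrative, not the authors' architecture.

```python
# Schematic two-level encoder: token-level attention within documents,
# then attention across per-document summary vectors.
import torch
import torch.nn as nn

class TwoLevelEncoder(nn.Module):
    def __init__(self, dim=64, heads=4):
        super().__init__()
        self.local = nn.TransformerEncoderLayer(dim, heads, batch_first=True)
        self.cross = nn.TransformerEncoderLayer(dim, heads, batch_first=True)

    def forward(self, docs):                 # docs: (n_docs, seq_len, dim)
        local = self.local(docs)             # within-document attention
        summaries = local.mean(dim=1)        # one vector per document
        fused = self.cross(summaries.unsqueeze(0)).squeeze(0)  # across docs
        return local + fused.unsqueeze(1)    # broadcast fused info to tokens

enc = TwoLevelEncoder()
print(enc(torch.randn(3, 10, 64)).shape)     # torch.Size([3, 10, 64])
```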
https://aclanthology.org/2023.findings-emnlp.392.bib
|
https://aclanthology.org/2023.findings-emnlp.392/
|
@inproceedings{kim-etal-2023-universal,
title = "Universal Domain Adaptation for Robust Handling of Distributional Shifts in {NLP}",
author = "Kim, Hyuhng and
Cho, Hyunsoo and
Lee, Sang-Woo and
Kim, Junyeob and
Park, Choonghyun and
Lee, Sang-goo and
Yoo, Kang and
Kim, Taeuk",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.392",
doi = "10.18653/v1/2023.findings-emnlp.392",
pages = "5888--5905",
abstract = "When deploying machine learning systems to the wild, it is highly desirable for them to effectively leverage prior knowledge to the unfamiliar domain while also firing alarms to anomalous inputs. In order to address these requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision, focusing on achieving both adaptation ability and robustness (i.e., the ability to detect out-of-distribution samples). While UniDA has led significant progress in computer vision, its application on language input still needs to be explored despite its feasibility. In this paper, we propose a comprehensive benchmark for natural language that offers thorough viewpoints of the model{'}s generalizability and robustness. Our benchmark encompasses multiple datasets with varying difficulty levels and characteristics, including temporal shifts and diverse domains. On top of our testbed, we validate existing UniDA methods from computer vision and state-of-the-art domain adaptation techniques from NLP literature, yielding valuable findings: We observe that UniDA methods originally designed for image input can be effectively transferred to the natural language domain while also underscoring the effect of adaptation difficulty in determining the model{'}s performance.",
}
|
When deploying machine learning systems in the wild, it is highly desirable for them to effectively transfer prior knowledge to the unfamiliar domain while also raising alarms on anomalous inputs. In order to address these requirements, Universal Domain Adaptation (UniDA) has emerged as a novel research area in computer vision, focusing on achieving both adaptation ability and robustness (i.e., the ability to detect out-of-distribution samples). While UniDA has led to significant progress in computer vision, its application to language input remains largely unexplored despite its feasibility. In this paper, we propose a comprehensive benchmark for natural language that offers thorough viewpoints of the model{'}s generalizability and robustness. Our benchmark encompasses multiple datasets with varying difficulty levels and characteristics, including temporal shifts and diverse domains. On top of our testbed, we validate existing UniDA methods from computer vision and state-of-the-art domain adaptation techniques from NLP literature, yielding valuable findings: We observe that UniDA methods originally designed for image input can be effectively transferred to the natural language domain while also underscoring the effect of adaptation difficulty in determining the model{'}s performance.
|
[
"Kim, Hyuhng",
"Cho, Hyunsoo",
"Lee, Sang-Woo",
"Kim, Junyeob",
"Park, Choonghyun",
"Lee, Sang-goo",
"Yoo, Kang",
"Kim, Taeuk"
] |
Universal Domain Adaptation for Robust Handling of Distributional Shifts in NLP
|
findings-emnlp.392
|
2310.14849
|
[
"https://github.com/heyjoonkim/universal_domain_adaptation_for_nlp"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
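The core UniDA decision rule the abstract alludes to can be sketched simply: accept a target example into a known class when the model is confident, otherwise flag it as "unknown". The confidence measure and threshold below are illustrative.

```python
# Minimal sketch of confidence-thresholded prediction with an "unknown"
# (out-of-distribution) label; threshold value is a placeholder.
import torch

def unida_predict(logits, threshold=0.7, unknown_label=-1):
    probs = torch.softmax(logits, dim=-1)
    conf, pred = probs.max(dim=-1)
    pred[conf < threshold] = unknown_label   # low confidence -> unknown
    return pred

print(unida_predict(torch.tensor([[2.0, 0.1, 0.1], [0.4, 0.5, 0.45]])))
# tensor([ 0, -1])
```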
https://aclanthology.org/2023.findings-emnlp.393.bib
|
https://aclanthology.org/2023.findings-emnlp.393/
|
@inproceedings{hwang-etal-2023-aligning,
title = "Aligning Language Models to User Opinions",
author = "Hwang, EunJeong and
Majumder, Bodhisattwa and
Tandon, Niket",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.393",
doi = "10.18653/v1/2023.findings-emnlp.393",
pages = "5906--5919",
abstract = "An important aspect of developing LLMs that interact with humans is to align models{'} behavior to their users. It is possible to prompt an LLM into behaving as a certain persona, especially a user group or ideological persona the model captured during its pertaining stage. But, how to best align an LLM with a specific user and not a demographic or ideological group remains an open question. Mining public opinion surveys (by PEW research), we find that the opinions of a user and their demographics and ideologies are not mutual predictors. We use this insight to align LLMs by modeling relevant past user opinions in addition to user demographics and ideology, achieving up to 7 points accuracy gains in predicting public opinions from survey questions across a broad set of topics. Our work opens up the research avenues to bring user opinions as an important ingredient in aligning language models.",
}
|
An important aspect of developing LLMs that interact with humans is to align models{'} behavior to their users. It is possible to prompt an LLM into behaving as a certain persona, especially a user group or ideological persona the model captured during its pre-training stage. But how to best align an LLM with a specific user, rather than a demographic or ideological group, remains an open question. Mining public opinion surveys (by Pew Research), we find that the opinions of a user and their demographics and ideologies are not mutual predictors. We use this insight to align LLMs by modeling relevant past user opinions in addition to user demographics and ideology, achieving accuracy gains of up to 7 points in predicting public opinions from survey questions across a broad set of topics. Our work opens up research avenues for incorporating user opinions as an important ingredient in aligning language models.
|
[
"Hwang, EunJeong",
"Majumder, Bodhisattwa",
"T",
"on, Niket"
] |
Aligning Language Models to User Opinions
|
findings-emnlp.393
|
2305.14929
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
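A hedged sketch of the prompting setup this abstract describes: condition the model on demographics, ideology, and relevant past opinions before asking a new survey question. The template and field names are invented for illustration; `ask_llm` would be any chat-completion call.

```python
# Illustrative prompt construction for opinion alignment; wording is an
# assumption, not the paper's exact template.
def build_opinion_prompt(demographics, ideology, past_opinions, question):
    past = "\n".join(f"- Q: {q} A: {a}" for q, a in past_opinions)
    return (
        f"A survey respondent is {demographics}, ideology: {ideology}.\n"
        f"Their previous answers:\n{past}\n"
        f"How would they answer: {question}\n"
    )

prompt = build_opinion_prompt(
    "a 45-year-old teacher from Ohio",
    "moderate",
    [("Should college be free?", "Somewhat agree")],
    "Should student debt be cancelled?",
)
print(prompt)
```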
https://aclanthology.org/2023.findings-emnlp.394.bib
|
https://aclanthology.org/2023.findings-emnlp.394/
|
@inproceedings{zhao-etal-2023-ccsrd,
title = "{CCSRD}: Content-Centric Speech Representation Disentanglement Learning for End-to-End Speech Translation",
author = "Zhao, Xiaohu and
Sun, Haoran and
Lei, Yikun and
Zhu, Shaolin and
Xiong, Deyi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.394",
doi = "10.18653/v1/2023.findings-emnlp.394",
pages = "5920--5932",
abstract = "Deep neural networks have demonstrated their capacity in extracting features from speech inputs. However, these features may include non-linguistic speech factors such as timbre and speaker identity, which are not directly related to translation. In this paper, we propose a content-centric speech representation disentanglement learning framework for speech translation, CCSRD, which decomposes speech representations into content representations and non-linguistic representations via representation disentanglement learning. CCSRD consists of a content encoder that encodes linguistic content information from the speech input, a non-content encoder that models non-linguistic speech features, and a disentanglement module that learns disentangled representations with a cyclic reconstructor, feature reconstructor and speaker classifier trained in a multi-task learning way. Experiments on the MuST-C benchmark dataset demonstrate that CCSRD achieves an average improvement of +0.9 BLEU in two settings across five translation directions over the baseline, outperforming state-of-the-art end-to-end speech translation models and cascaded models.",
}
|
Deep neural networks have demonstrated their capacity in extracting features from speech inputs. However, these features may include non-linguistic speech factors such as timbre and speaker identity, which are not directly related to translation. In this paper, we propose a content-centric speech representation disentanglement learning framework for speech translation, CCSRD, which decomposes speech representations into content representations and non-linguistic representations via representation disentanglement learning. CCSRD consists of a content encoder that encodes linguistic content information from the speech input, a non-content encoder that models non-linguistic speech features, and a disentanglement module that learns disentangled representations with a cyclic reconstructor, feature reconstructor and speaker classifier trained via multi-task learning. Experiments on the MuST-C benchmark dataset demonstrate that CCSRD achieves an average improvement of +0.9 BLEU in two settings across five translation directions over the baseline, outperforming state-of-the-art end-to-end speech translation models and cascaded models.
|
[
"Zhao, Xiaohu",
"Sun, Haoran",
"Lei, Yikun",
"Zhu, Shaolin",
"Xiong, Deyi"
] |
CCSRD: Content-Centric Speech Representation Disentanglement Learning for End-to-End Speech Translation
|
findings-emnlp.394
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
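The multi-task disentanglement objective sketched in the abstract can be illustrated with toy tensors: content features should reconstruct the input together with style features, while style features alone should predict the speaker. Weights and module shapes below are placeholders, not the paper's configuration.

```python
# Toy sketch of a disentanglement loss combining reconstruction and
# speaker classification; the translation loss would be added in full.
import torch
import torch.nn as nn
import torch.nn.functional as F

content = torch.randn(8, 32)          # stand-in content representations
style = torch.randn(8, 32)            # stand-in non-linguistic features
speech = torch.randn(8, 64)           # stand-in speech features to rebuild
speaker_id = torch.randint(0, 4, (8,))

reconstructor = nn.Linear(64, 64)
speaker_clf = nn.Linear(32, 4)

loss_rec = F.mse_loss(reconstructor(torch.cat([content, style], -1)), speech)
loss_spk = F.cross_entropy(speaker_clf(style), speaker_id)
loss = loss_rec + 0.5 * loss_spk      # translation loss omitted here
print(loss.item())
```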
https://aclanthology.org/2023.findings-emnlp.395.bib
|
https://aclanthology.org/2023.findings-emnlp.395/
|
@inproceedings{lu-etal-2023-miracle,
title = "Miracle: Towards Personalized Dialogue Generation with Latent-Space Multiple Personal Attribute Control",
author = "Lu, Zhenyi and
Wei, Wei and
Qu, Xiaoye and
Mao, Xian-Ling and
Chen, Dangyang and
Chen, Jixiong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.395",
doi = "10.18653/v1/2023.findings-emnlp.395",
pages = "5933--5957",
abstract = "Personalized dialogue systems aim to endow the chatbot agent with more anthropomorphic traits for human-like interactions. Previous approaches have explored explicitly user profile modeling using text descriptions, implicit derivation of user embeddings, or utilizing handicraft prompts for ChatGPT-like models. However, textual personas are limited in describing multi-faceted attributes (\textit{e.g.}, \textit{language style, inner character nuances}), implicit embedding suffers from personality sparsity, and handicraft prompts lack fine-grained and stable controllability. Hence, these approaches may struggle with complex personalized dialogue generation tasks that require generating controllable responses with multiple personal attributes. To this end, we propose \textbf{Miracle}, a novel personalized dialogue generation method through \textbf{M}ult\textbf{I}ple Pe\textbf{R}sonal \textbf{A}ttributes \textbf{C}ontrol within \textbf{L}atent-Space \textbf{E}nergy-based Models. ttributes \textbf{C}ontrol within \textbf{L}atent-Space \textbf{E}nergy-based Models. Specifically, our approach first disentangles complex personality into multi-faceted attributes. Subsequently, we employ a conditional variational auto-encoder to align with the dense personalized responses within a latent joint attribute space. We have also tailored a dedicated energy function and customized the ordinary differential equations sampling method to offer flexible attribute composition and precise attribute control. Extensive experiments demonstrate that Miracle outperforms several strong baselines in terms of personality controllability and response generation quality. Our dataset and code are available at \url{https://github.com/LZY-the-boys/MIRACLE}",
}
|
Personalized dialogue systems aim to endow the chatbot agent with more anthropomorphic traits for human-like interactions. Previous approaches have explored explicit user profile modeling using text descriptions, implicit derivation of user embeddings, or utilizing handcrafted prompts for ChatGPT-like models. However, textual personas are limited in describing multi-faceted attributes (\textit{e.g.}, \textit{language style, inner character nuances}), implicit embedding suffers from personality sparsity, and handcrafted prompts lack fine-grained and stable controllability. Hence, these approaches may struggle with complex personalized dialogue generation tasks that require generating controllable responses with multiple personal attributes. To this end, we propose \textbf{Miracle}, a novel personalized dialogue generation method through \textbf{M}ult\textbf{I}ple Pe\textbf{R}sonal \textbf{A}ttributes \textbf{C}ontrol within \textbf{L}atent-Space \textbf{E}nergy-based Models. Specifically, our approach first disentangles complex personality into multi-faceted attributes. Subsequently, we employ a conditional variational auto-encoder to align with the dense personalized responses within a latent joint attribute space. We have also tailored a dedicated energy function and customized the ordinary differential equations sampling method to offer flexible attribute composition and precise attribute control. Extensive experiments demonstrate that Miracle outperforms several strong baselines in terms of personality controllability and response generation quality. Our dataset and code are available at \url{https://github.com/LZY-the-boys/MIRACLE}
|
[
"Lu, Zhenyi",
"Wei, Wei",
"Qu, Xiaoye",
"Mao, Xian-Ling",
"Chen, Dangyang",
"Chen, Jixiong"
] |
Miracle: Towards Personalized Dialogue Generation with Latent-Space Multiple Personal Attribute Control
|
findings-emnlp.395
|
2310.18342
|
[
"https://github.com/lzy-the-boys/miracle"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
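To illustrate how per-attribute energies can be composed in a latent space, here is a schematic Langevin sampler over toy energy functions. The actual paper uses an ODE-based sampler and learned energies, so treat this purely as a sketch of attribute composition.

```python
# Schematic: compose per-attribute energies and sample latents with a few
# Langevin steps; energies, step size, and step count are all toy choices.
import torch

def langevin_sample(energies, z, steps=50, step_size=0.1):
    for _ in range(steps):
        z = z.detach().requires_grad_(True)
        total = sum(e(z).sum() for e in energies)    # attribute composition
        grad, = torch.autograd.grad(total, z)
        z = z - step_size * grad + (2 * step_size) ** 0.5 * torch.randn_like(z)
    return z.detach()

# Two toy attribute energies pulling dim 0 toward 1 and dim 1 toward -1.
e1 = lambda z: (z[:, 0] - 1.0) ** 2
e2 = lambda z: (z[:, 1] + 1.0) ** 2
print(langevin_sample([e1, e2], torch.randn(4, 8)).mean(0)[:2])
```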
https://aclanthology.org/2023.findings-emnlp.396.bib
|
https://aclanthology.org/2023.findings-emnlp.396/
|
@inproceedings{okabe-yvon-2023-towards,
title = "Towards Multilingual Interlinear Morphological Glossing",
author = "Okabe, Shu and
Yvon, Fran{\c{c}}ois",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.396",
doi = "10.18653/v1/2023.findings-emnlp.396",
pages = "5958--5971",
abstract = "Interlinear Morphological Glosses are annotations produced in the context of language documentation. Their goal is to identify morphs occurring in an L1 sentence and to explicit their function and meaning, with the further support of an associated translation in L2. We study here the task of automatic glossing, aiming to provide linguists with adequate tools to facilitate this process. Our formalisation of glossing uses a latent variable Conditional Random Field (CRF), which labels the L1 morphs while simultaneously aligning them to L2 words. In experiments with several under-resourced languages, we show that this approach is both effective and data-efficient and mitigates the problem of annotating unknown morphs. We also discuss various design choices regarding the alignment process and the selection of features. We finally demonstrate that it can benefit from multilingual (pre-)training, achieving results which outperform very strong baselines.",
}
|
Interlinear Morphological Glosses are annotations produced in the context of language documentation. Their goal is to identify morphs occurring in an L1 sentence and to make their function and meaning explicit, with the further support of an associated translation in L2. We study here the task of automatic glossing, aiming to provide linguists with adequate tools to facilitate this process. Our formalisation of glossing uses a latent variable Conditional Random Field (CRF), which labels the L1 morphs while simultaneously aligning them to L2 words. In experiments with several under-resourced languages, we show that this approach is both effective and data-efficient and mitigates the problem of annotating unknown morphs. We also discuss various design choices regarding the alignment process and the selection of features. We finally demonstrate that it can benefit from multilingual (pre-)training, achieving results which outperform very strong baselines.
|
[
"Okabe, Shu",
"Yvon, Fran{\\c{c}}ois"
] |
Towards Multilingual Interlinear Morphological Glossing
|
findings-emnlp.396
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
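The latent-alignment idea in the glossing formalisation can be shown with a toy scorer: a gloss label for an L1 morph is scored by maximizing over possible aligned L2 words, so the alignment acts as a latent variable. Feature functions here are deliberately trivial and not the paper's CRF features.

```python
# Toy sketch of scoring gloss labels with alignment as a latent variable.
def score(morph, label, l2_words, weights):
    return max(weights.get((morph, label, w), 0.0) for w in l2_words)

weights = {("-ti", "PST", "walked"): 2.0, ("-ti", "PL", "walked"): 0.1}
l2 = ["she", "walked"]
best = max(["PST", "PL"], key=lambda lab: score("-ti", lab, l2, weights))
print(best)  # PST
```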
https://aclanthology.org/2023.findings-emnlp.397.bib
|
https://aclanthology.org/2023.findings-emnlp.397/
|
@inproceedings{chi-etal-2023-transformer,
title = "Transformer Working Memory Enables Regular Language Reasoning And Natural Language Length Extrapolation",
author = "Chi, Ta-Chung and
Fan, Ting-Han and
Rudnicky, Alexander and
Ramadge, Peter",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.397",
doi = "10.18653/v1/2023.findings-emnlp.397",
pages = "5972--5984",
abstract = "Unlike recurrent models, conventional wisdom has it that Transformers cannot perfectly model regular languages. Inspired by the notion of working memory, we propose a new Transformer variant named RegularGPT. With its novel combination of Weight-Sharing, Adaptive-Depth, and Sliding-Dilated-Attention, RegularGPT constructs working memory along the depth dimension, thereby enabling efficient and successful modeling of regular languages such as PARITY. We further test RegularGPT on the task of natural language length extrapolation and surprisingly find that it rediscovers the local windowed attention effect deemed necessary in prior work for length extrapolation.",
}
|
Conventional wisdom has it that, unlike recurrent models, Transformers cannot perfectly model regular languages. Inspired by the notion of working memory, we propose a new Transformer variant named RegularGPT. With its novel combination of Weight-Sharing, Adaptive-Depth, and Sliding-Dilated-Attention, RegularGPT constructs working memory along the depth dimension, thereby enabling efficient and successful modeling of regular languages such as PARITY. We further test RegularGPT on the task of natural language length extrapolation and surprisingly find that it rediscovers the local windowed attention effect deemed necessary in prior work for length extrapolation.
|
[
"Chi, Ta-Chung",
"Fan, Ting-Han",
"Rudnicky, Alex",
"er",
"Ramadge, Peter"
] |
Transformer Working Memory Enables Regular Language Reasoning And Natural Language Length Extrapolation
|
findings-emnlp.397
|
2305.03796
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
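To illustrate sliding dilated attention of the kind the abstract alludes to, the mask below lets each position attend to neighbours spaced 2^d apart at depth d, so information can combine pairwise across layers. This is mask construction only, with no claim about RegularGPT's exact design.

```python
# Build a causal, dilated attention mask (0 = allowed, -inf = blocked).
import torch

def dilated_mask(seq_len, depth, window=2):
    stride = 2 ** depth
    mask = torch.full((seq_len, seq_len), float("-inf"))
    for i in range(seq_len):
        for k in range(window):
            j = i - k * stride
            if j >= 0:
                mask[i, j] = 0.0      # attend to position i - k * 2**depth
    return mask

print(dilated_mask(8, depth=1))
```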
https://aclanthology.org/2023.findings-emnlp.398.bib
|
https://aclanthology.org/2023.findings-emnlp.398/
|
@inproceedings{ye-etal-2023-enhancing,
title = "Enhancing Conversational Search: Large Language Model-Aided Informative Query Rewriting",
author = "Ye, Fanghua and
Fang, Meng and
Li, Shenghui and
Yilmaz, Emine",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.398",
doi = "10.18653/v1/2023.findings-emnlp.398",
pages = "5985--6006",
abstract = "Query rewriting plays a vital role in enhancing conversational search by transforming context-dependent user queries into standalone forms. Existing approaches primarily leverage human-rewritten queries as labels to train query rewriting models. However, human rewrites may lack sufficient information for optimal retrieval performance. To overcome this limitation, we propose utilizing large language models (LLMs) as query rewriters, enabling the generation of informative query rewrites through well-designed instructions. We define four essential properties for well-formed rewrites and incorporate all of them into the instruction. In addition, we introduce the role of rewrite editors for LLMs when initial query rewrites are available, forming a {``}rewrite-then-edit{''} process. Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency. Our experimental evaluation on the QReCC dataset demonstrates that informative query rewrites can yield substantially improved retrieval performance compared to human rewrites, especially with sparse retrievers.",
}
|
Query rewriting plays a vital role in enhancing conversational search by transforming context-dependent user queries into standalone forms. Existing approaches primarily leverage human-rewritten queries as labels to train query rewriting models. However, human rewrites may lack sufficient information for optimal retrieval performance. To overcome this limitation, we propose utilizing large language models (LLMs) as query rewriters, enabling the generation of informative query rewrites through well-designed instructions. We define four essential properties for well-formed rewrites and incorporate all of them into the instruction. In addition, we introduce the role of rewrite editors for LLMs when initial query rewrites are available, forming a {``}rewrite-then-edit{''} process. Furthermore, we propose distilling the rewriting capabilities of LLMs into smaller models to reduce rewriting latency. Our experimental evaluation on the QReCC dataset demonstrates that informative query rewrites can yield substantially improved retrieval performance compared to human rewrites, especially with sparse retrievers.
|
[
"Ye, Fanghua",
"Fang, Meng",
"Li, Shenghui",
"Yilmaz, Emine"
] |
Enhancing Conversational Search: Large Language Model-Aided Informative Query Rewriting
|
findings-emnlp.398
|
2310.09716
|
[
"https://github.com/smartyfh/infocqr"
] |
https://huggingface.co/papers/2310.09716
| 0 | 0 | 1 | 4 |
[] |
[] |
[] | 1 |
Poster
|
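A hedged sketch of the two-stage "rewrite-then-edit" flow described above: generate a standalone rewrite, then ask the model to improve a draft. `llm` is a placeholder for any text-completion callable, and the instruction wording paraphrases the four properties rather than quoting the paper.

```python
# Illustrative rewrite-then-edit prompting; templates are assumptions.
def rewrite_then_edit(history, query, initial_rewrite, llm):
    instruction = (
        "Rewrite the final question so it is self-contained, unambiguous, "
        "fluent, and keeps the user's intent.\n"
    )
    context = "\n".join(history)
    draft = initial_rewrite or llm(f"{instruction}{context}\nQ: {query}\nRewrite:")
    edited = llm(f"{instruction}{context}\nQ: {query}\nDraft: {draft}\nImprove:")
    return edited

fake_llm = lambda p: "Who won the 2022 World Cup final?"
print(rewrite_then_edit(["User: tell me about the World Cup"],
                        "who won it?", None, fake_llm))
```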
https://aclanthology.org/2023.findings-emnlp.399.bib
|
https://aclanthology.org/2023.findings-emnlp.399/
|
@inproceedings{li-etal-2023-distilling,
title = "Distilling {C}hat{GPT} for Explainable Automated Student Answer Assessment",
author = "Li, Jiazheng and
Gui, Lin and
Zhou, Yuxiang and
West, David and
Aloisi, Cesare and
He, Yulan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.399",
doi = "10.18653/v1/2023.findings-emnlp.399",
pages = "6007--6026",
abstract = "Providing explainable and faithful feedback is crucial for automated student answer assessment. In this paper, we introduce a novel framework that explores using ChatGPT, a cutting-edge large language model, for the concurrent tasks of student answer scoring and rationale generation. We identify the appropriate instructions by prompting ChatGPT with different templates to collect the rationales, where inconsistent rationales are refined to align with marking standards. The refined ChatGPT outputs enable us to fine-tune a smaller language model that simultaneously assesses student answers and provides rationales. Extensive experiments on the benchmark dataset show that the proposed method improves the overall QWK score by 11{\%} compared to ChatGPT. Furthermore, our thorough analysis and human evaluation demonstrate that the rationales generated by our proposed method are comparable to those of ChatGPT. Our approach provides a viable solution to achieve explainable automated assessment in education",
}
|
Providing explainable and faithful feedback is crucial for automated student answer assessment. In this paper, we introduce a novel framework that explores using ChatGPT, a cutting-edge large language model, for the concurrent tasks of student answer scoring and rationale generation. We identify the appropriate instructions by prompting ChatGPT with different templates to collect the rationales, where inconsistent rationales are refined to align with marking standards. The refined ChatGPT outputs enable us to fine-tune a smaller language model that simultaneously assesses student answers and provides rationales. Extensive experiments on the benchmark dataset show that the proposed method improves the overall QWK score by 11{\%} compared to ChatGPT. Furthermore, our thorough analysis and human evaluation demonstrate that the rationales generated by our proposed method are comparable to those of ChatGPT. Our approach provides a viable solution to achieve explainable automated assessment in education.
|
[
"Li, Jiazheng",
"Gui, Lin",
"Zhou, Yuxiang",
"West, David",
"Aloisi, Cesare",
"He, Yulan"
] |
Distilling ChatGPT for Explainable Automated Student Answer Assessment
|
findings-emnlp.399
|
2305.12962
|
[
"https://github.com/lijiazheng99/aera"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
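A minimal sketch of turning refined ChatGPT outputs into distillation targets: each training example maps a question/answer pair to a score plus rationale, which a smaller seq2seq model is then fine-tuned to generate jointly. Field names below are illustrative, not the paper's schema.

```python
# Illustrative construction of a distillation example for joint
# score-and-rationale generation.
def to_distillation_example(question, student_answer, score, rationale):
    source = f"score this answer.\nquestion: {question}\nanswer: {student_answer}"
    target = f"score: {score} | rationale: {rationale}"
    return {"source": source, "target": target}

print(to_distillation_example(
    "Why does ice float?", "Because it is lighter.",
    1, "Mentions lower density implicitly but lacks the mechanism."))
```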
https://aclanthology.org/2023.findings-emnlp.400.bib
|
https://aclanthology.org/2023.findings-emnlp.400/
|
@inproceedings{li-etal-2023-grammatical,
title = "Grammatical Error Correction via Mixed-Grained Weighted Training",
author = "Li, Jiahao and
Wang, Quan and
Zhu, Chiwei and
Mao, Zhendong and
Zhang, Yongdong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.400",
doi = "10.18653/v1/2023.findings-emnlp.400",
pages = "6027--6037",
abstract = "The task of Grammatical Error Correction (GEC) aims to automatically correct grammatical errors in natural texts. Almost all previous works treat annotated training data equally, but inherent discrepancies in data are neglected. In this paper, the inherent discrepancies are manifested in two aspects, namely, accuracy of data annotation and diversity of potential annotations. To this end, we propose MainGEC, which designs token-level and sentence-level training weights based on inherent discrepancies therein, and then conducts mixed-grained weighted training to improve the training effect for GEC. Empirical evaluation shows that whether in the Seq2Seq or Seq2Edit manner, MainGEC achieves consistent and significant performance improvements on two benchmark datasets, demonstrating the effectiveness and superiority of the mixed-grained weighted training. Further ablation experiments verify the effectiveness of designed weights for both granularities in MainGEC.",
}
|
The task of Grammatical Error Correction (GEC) aims to automatically correct grammatical errors in natural texts. Almost all previous works treat annotated training data equally, but inherent discrepancies in data are neglected. In this paper, the inherent discrepancies are manifested in two aspects, namely, accuracy of data annotation and diversity of potential annotations. To this end, we propose MainGEC, which designs token-level and sentence-level training weights based on inherent discrepancies therein, and then conducts mixed-grained weighted training to improve the training effect for GEC. Empirical evaluation shows that whether in the Seq2Seq or Seq2Edit manner, MainGEC achieves consistent and significant performance improvements on two benchmark datasets, demonstrating the effectiveness and superiority of the mixed-grained weighted training. Further ablation experiments verify the effectiveness of designed weights for both granularities in MainGEC.
|
[
"Li, Jiahao",
"Wang, Quan",
"Zhu, Chiwei",
"Mao, Zhendong",
"Zhang, Yongdong"
] |
Grammatical Error Correction via Mixed-Grained Weighted Training
|
findings-emnlp.400
|
2311.13848
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
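The mixed-grained weighting in MainGEC can be illustrated as a per-token and per-sentence rescaling of the usual cross-entropy. How the weights are derived from annotation accuracy and diversity is the paper's contribution and is not reproduced here; the tensors below are placeholders.

```python
# Sketch of a cross-entropy loss weighted at token and sentence granularity.
import torch
import torch.nn.functional as F

def weighted_gec_loss(logits, targets, token_w, sent_w):
    # logits: (batch, seq, vocab); targets: (batch, seq)
    per_token = F.cross_entropy(
        logits.transpose(1, 2), targets, reduction="none")   # (batch, seq)
    return (sent_w.unsqueeze(1) * token_w * per_token).mean()

logits = torch.randn(2, 5, 100)
targets = torch.randint(0, 100, (2, 5))
print(weighted_gec_loss(logits, targets,
                        token_w=torch.rand(2, 5), sent_w=torch.rand(2)).item())
```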
https://aclanthology.org/2023.findings-emnlp.401.bib
|
https://aclanthology.org/2023.findings-emnlp.401/
|
@inproceedings{sheng-etal-2023-unified,
title = "A Unified Framework for Synaesthesia Analysis",
author = "Sheng, Kun and
Wang, Zhongqing and
Zhao, Qingqing and
Jiang, Xiaotong and
Zhou, Guodong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.401",
doi = "10.18653/v1/2023.findings-emnlp.401",
pages = "6038--6048",
abstract = "Synaesthesia refers to the description of perceptions in one sensory modality through concepts from other modalities. It involves not only a linguistic phenomenon, but also a cognitive phenomenon structuring human thought and action, which makes understanding it challenging. As a means of cognition, synaesthesia is rendered by more than sensory modalities, cue and stimulus can also play an important role in expressing and understanding it. In addition, understanding synaesthesia involves many cognitive efforts, such as identifying the semantic relationship between sensory words and modalities. Therefore, we propose a unified framework focusing on annotating all kinds of synaesthetic elements and fully exploring the relationship among them. In particular, we introduce a new annotation scheme, including sensory modalities as well as their cues and stimuli, which facilitate understanding synaesthetic information collectively. We further design a structure generation model to capture the relations among synaesthetic elements and generate them jointly. Through extensive experiments, the importance of proposed dataset can be verified by the statistics and progressive performances. In addition, our proposed model yields state-of-the-art results, demonstrating its effectiveness.",
}
|
Synaesthesia refers to the description of perceptions in one sensory modality through concepts from other modalities. It is not only a linguistic phenomenon but also a cognitive phenomenon structuring human thought and action, which makes understanding it challenging. As a means of cognition, synaesthesia is rendered by more than sensory modalities; cues and stimuli can also play an important role in expressing and understanding it. In addition, understanding synaesthesia involves considerable cognitive effort, such as identifying the semantic relationship between sensory words and modalities. Therefore, we propose a unified framework focusing on annotating all kinds of synaesthetic elements and fully exploring the relationships among them. In particular, we introduce a new annotation scheme covering sensory modalities as well as their cues and stimuli, which facilitates understanding synaesthetic information collectively. We further design a structure generation model to capture the relations among synaesthetic elements and generate them jointly. Extensive experiments verify the importance of the proposed dataset through its statistics and the progressive performance gains it enables. In addition, our proposed model yields state-of-the-art results, demonstrating its effectiveness.
|
[
"Sheng, Kun",
"Wang, Zhongqing",
"Zhao, Qingqing",
"Jiang, Xiaotong",
"Zhou, Guodong"
] |
A Unified Framework for Synaesthesia Analysis
|
findings-emnlp.401
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
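One common way to let a seq2seq model generate all annotated elements jointly, as the abstract describes, is to linearize them into a tagged target string. The tag inventory below is invented for illustration.

```python
# Toy linearization of synaesthetic elements into a generation target.
def linearize(modality_src, modality_tgt, cue, stimulus):
    return (f"<src> {modality_src} <tgt> {modality_tgt} "
            f"<cue> {cue} <stim> {stimulus}")

print(linearize("touch", "sound", "sharp", "a sharp voice"))
```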
https://aclanthology.org/2023.findings-emnlp.402.bib
|
https://aclanthology.org/2023.findings-emnlp.402/
|
@inproceedings{kabra-elenberg-2023-domain,
title = "Domain Private Transformers for Multi-Domain Dialog Systems",
author = "Kabra, Anmol and
Elenberg, Ethan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.402",
doi = "10.18653/v1/2023.findings-emnlp.402",
pages = "6049--6061",
abstract = "Large, general purpose language models have demonstrated impressive performance across many different conversational domains. While multi-domain language models achieve low overall perplexity, their outputs are not guaranteed to stay within the domain of a given input prompt. This paper proposes \textit{domain privacy} as a novel way to quantify how likely a conditional language model will leak across domains. We also develop policy functions based on token-level domain classification, and propose an efficient fine-tuning method to improve the trained model{'}s domain privacy. Experiments on membership inference attacks show that our proposed method has comparable resiliency to methods adapted from recent literature on differentially private language models.",
}
|
Large, general purpose language models have demonstrated impressive performance across many different conversational domains. While multi-domain language models achieve low overall perplexity, their outputs are not guaranteed to stay within the domain of a given input prompt. This paper proposes \textit{domain privacy} as a novel way to quantify how likely a conditional language model will leak across domains. We also develop policy functions based on token-level domain classification, and propose an efficient fine-tuning method to improve the trained model{'}s domain privacy. Experiments on membership inference attacks show that our proposed method has comparable resiliency to methods adapted from recent literature on differentially private language models.
|
[
"Kabra, Anmol",
"Elenberg, Ethan"
] |
Domain Private Transformers for Multi-Domain Dialog Systems
|
findings-emnlp.402
|
2305.14208
|
[
"https://github.com/asappresearch/domain-private-transformers"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
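An illustrative sketch of a token-level policy for domain privacy: score each candidate next token with a domain classifier and veto tokens that leak into a domain other than the prompt's. The classifier here is a toy keyword lookup; the paper learns this from data.

```python
# Toy token-level domain filter for decoding; DOMAIN_WORDS is a placeholder
# for a learned token-level domain classifier.
DOMAIN_WORDS = {"banking": {"account", "loan"}, "travel": {"flight", "hotel"}}

def token_domain(token):
    for domain, words in DOMAIN_WORDS.items():
        if token in words:
            return domain
    return None

def filter_tokens(candidates, prompt_domain):
    return [t for t in candidates
            if token_domain(t) in (None, prompt_domain)]

print(filter_tokens(["your", "loan", "flight"], "banking"))  # drops 'flight'
```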
https://aclanthology.org/2023.findings-emnlp.403.bib
|
https://aclanthology.org/2023.findings-emnlp.403/
|
@inproceedings{yang-li-2023-visual,
title = "Visual Elements Mining as Prompts for Instruction Learning for Target-Oriented Multimodal Sentiment Classification",
author = "Yang, Bin and
Li, Jinlong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.403",
doi = "10.18653/v1/2023.findings-emnlp.403",
pages = "6062--6075",
abstract = "Target-oriented Multimodal Sentiment Classification (TMSC) aims to incorporate visual modality with text modality to identify the sentiment polarity towards a specific target within a sentence. To address this task, we propose a Visual Elements Mining as Prompts (VEMP) method, which describes the semantic information of visual elements with Text Symbols Embedded in the Image (TSEI), Target-aware Adjective-Noun Pairs (TANPs) and image scene caption, and then transform them into prompts for instruction learning of the model Tk-Instruct. In our VEMP, the text symbols embedded in the image may contain the textual descriptions of fine-grained visual elements, and are extracted as input TSEI; we extract adjective-noun pairs from the image and align them with the target to obtain TANPs, in which the adjectives provide emotional embellishments for the relevant target; finally, to effectively fuse these visual elements with text modality for sentiment prediction, we integrate them to construct instruction prompts for instruction-tuning Tk-Instruct which possesses powerful learning capabilities under instructions. Extensive experimental results show that our method achieves state-of-the-art performance on two benchmark datasets. And further analysis demonstrates the effectiveness of each component of our method.",
}
|
Target-oriented Multimodal Sentiment Classification (TMSC) aims to incorporate visual modality with text modality to identify the sentiment polarity towards a specific target within a sentence. To address this task, we propose a Visual Elements Mining as Prompts (VEMP) method, which describes the semantic information of visual elements with Text Symbols Embedded in the Image (TSEI), Target-aware Adjective-Noun Pairs (TANPs), and an image scene caption, and then transforms them into prompts for instruction learning of the model Tk-Instruct. In our VEMP, the text symbols embedded in the image may contain the textual descriptions of fine-grained visual elements, and are extracted as input TSEI; we extract adjective-noun pairs from the image and align them with the target to obtain TANPs, in which the adjectives provide emotional embellishments for the relevant target; finally, to effectively fuse these visual elements with text modality for sentiment prediction, we integrate them to construct instruction prompts for instruction-tuning Tk-Instruct, which possesses powerful learning capabilities under instructions. Extensive experimental results show that our method achieves state-of-the-art performance on two benchmark datasets. And further analysis demonstrates the effectiveness of each component of our method.
|
[
"Yang, Bin",
"Li, Jinlong"
] |
Visual Elements Mining as Prompts for Instruction Learning for Target-Oriented Multimodal Sentiment Classification
|
findings-emnlp.403
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
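A hedged sketch of assembling the instruction prompt from the mined visual elements (TSEI, TANPs, scene caption) plus the sentence and target; the exact template used with Tk-Instruct is an assumption.

```python
# Illustrative VEMP-style prompt assembly; wording is a placeholder.
def build_vemp_prompt(sentence, target, tsei, tanps, caption):
    pairs = ", ".join(f"{adj} {noun}" for adj, noun in tanps)
    return (
        f"Image text: {tsei}. Image pairs: {pairs}. Scene: {caption}.\n"
        f"Sentence: {sentence}\n"
        f"What is the sentiment toward '{target}'? Options: positive, "
        f"negative, neutral."
    )

print(build_vemp_prompt("Great show tonight!", "show", "LIVE",
                        [("cheering", "crowd")], "a concert stage"))
```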
https://aclanthology.org/2023.findings-emnlp.404.bib
|
https://aclanthology.org/2023.findings-emnlp.404/
|
@inproceedings{ko-etal-2023-nash,
title = "{NASH}: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models",
author = "Ko, Jongwoo and
Park, Seungjoon and
Kim, Yujin and
Ahn, Sumyeong and
Chang, Du-Seong and
Ahn, Euijai and
Yun, Se-Young",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.404",
doi = "10.18653/v1/2023.findings-emnlp.404",
pages = "6076--6093",
abstract = "Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as Transformers. Despite the versatility of encoder-decoder models in numerous NLP tasks, the structured pruning methods on such models are relatively less explored compared to encoder-only models. In this study, we investigate the behavior of the structured pruning of the encoder-decoder models in the decoupled pruning perspective of the encoder and decoder component, respectively. Our findings highlight two insights: (1) the number of decoder layers is the dominant factor of inference speed, and (2) low sparsity in the pruned encoder network enhances generation quality. Motivated by these findings, we propose a simple and effective framework, NASH, that narrows the encoder and shortens the decoder networks of encoder-decoder models. Extensive experiments on diverse generation and inference tasks validate the effectiveness of our method in both speedup and output quality.",
}
|
Structured pruning methods have proven effective in reducing the model size and accelerating inference speed in various network architectures such as Transformers. Despite the versatility of encoder-decoder models in numerous NLP tasks, structured pruning methods for such models are less explored than for encoder-only models. In this study, we investigate the behavior of structured pruning for encoder-decoder models from a decoupled perspective, pruning the encoder and decoder components separately. Our findings highlight two insights: (1) the number of decoder layers is the dominant factor of inference speed, and (2) low sparsity in the pruned encoder network enhances generation quality. Motivated by these findings, we propose a simple and effective framework, NASH, that narrows the encoder and shortens the decoder networks of encoder-decoder models. Extensive experiments on diverse generation and inference tasks validate the effectiveness of our method in both speedup and output quality.
|
[
"Ko, Jongwoo",
"Park, Seungjoon",
"Kim, Yujin",
"Ahn, Sumyeong",
"Chang, Du-Seong",
"Ahn, Euijai",
"Yun, Se-Young"
] |
NASH: A Simple Unified Framework of Structured Pruning for Accelerating Encoder-Decoder Language Models
|
findings-emnlp.404
|
2310.10054
|
[
"https://github.com/jongwooko/nash-pruning-official"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
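The "shorten the decoder" half of the recipe can be demonstrated on a small T5 from Hugging Face transformers: drop decoder layers while leaving the encoder depth intact, since the abstract identifies decoder depth as the main latency factor. Uniform layer selection here is an illustrative choice (and the snippet assumes network access to download t5-small), not the paper's procedure.

```python
# Schematic decoder shortening on T5: keep 3 of the 6 decoder layers.
import torch.nn as nn
from transformers import T5ForConditionalGeneration

model = T5ForConditionalGeneration.from_pretrained("t5-small")
keep = [0, 3, 5]                                  # illustrative layer choice
model.decoder.block = nn.ModuleList(
    [model.decoder.block[i] for i in keep])
model.config.num_decoder_layers = len(keep)
print(sum(p.numel() for p in model.parameters()))  # reduced parameter count
```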
https://aclanthology.org/2023.findings-emnlp.405.bib
|
https://aclanthology.org/2023.findings-emnlp.405/
|
@inproceedings{peng-etal-2023-gbt,
title = "{GBT}: Generative Boosting Training Approach for Paraphrase Identification",
author = "Peng, Rui and
Jin, Zhiling and
Hong, Yu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.405",
doi = "10.18653/v1/2023.findings-emnlp.405",
pages = "6094--6103",
abstract = "Paraphrase Identification (PI), a task of determining whether a pair of sentences express the same meaning, is widely applied in Information Retrieval and Question Answering. Data Augmentation (DA) is proven effective in tackling the PI task. However, the majority of DA methods still suffer from two limitations: inefficiency and poor quality. In this study, we propose the Generative Boosting Training (GBT) approach for PI. GBT designs a boosting learning method for a single model based on the human learning process, utilizing seq2seq model to perform DA on misclassified instances periodically. We conduct experiments on the benchmark corpora QQP and LCQMC, towards both English and Chinese PI tasks. Experimental results show that our method yields significant improvements on a variety of Pre-trained Language Model (PLM) based baselines with good efficiency and effectiveness. It is noteworthy that a single BERT model (with a linear classifier) can outperform the state-of-the-art PI models with the boosting of GBT.",
}
|
Paraphrase Identification (PI), a task of determining whether a pair of sentences express the same meaning, is widely applied in Information Retrieval and Question Answering. Data Augmentation (DA) is proven effective in tackling the PI task. However, the majority of DA methods still suffer from two limitations: inefficiency and poor quality. In this study, we propose the Generative Boosting Training (GBT) approach for PI. GBT designs a boosting learning method for a single model based on the human learning process, utilizing a seq2seq model to periodically perform DA on misclassified instances. We conduct experiments on the benchmark corpora QQP and LCQMC, towards both English and Chinese PI tasks. Experimental results show that our method yields significant improvements on a variety of Pre-trained Language Model (PLM) based baselines with good efficiency and effectiveness. It is noteworthy that a single BERT model (with a linear classifier) can outperform the state-of-the-art PI models with the boosting of GBT.
|
[
"Peng, Rui",
"Jin, Zhiling",
"Hong, Yu"
] |
GBT: Generative Boosting Training Approach for Paraphrase Identification
|
findings-emnlp.405
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
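A sketch of the boosting loop the abstract describes: periodically collect the classifier's misclassified pairs and let a seq2seq paraphraser synthesize extra training pairs from them. `train_step`, `paraphrase`, and the data format are placeholders.

```python
# Toy GBT-style round: augment only the misclassified pairs, then retrain.
def gbt_round(model, data, train_step, paraphrase):
    hard = [(a, b, y) for a, b, y in data if model(a, b) != y]
    augmented = [(paraphrase(a), b, y) for a, b, y in hard]   # generative DA
    for example in data + augmented:
        train_step(model, example)
    return len(augmented)

model = lambda a, b: a == b                       # toy "classifier"
data = [("hi", "hello", True), ("hi", "hi", True)]
noop = lambda m, ex: None
print(gbt_round(model, data, noop, lambda s: s + "!"))   # 1 augmented pair
```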
https://aclanthology.org/2023.findings-emnlp.406.bib
|
https://aclanthology.org/2023.findings-emnlp.406/
|
@inproceedings{zou-etal-2023-decrisismb,
title = "{D}e{C}risis{MB}: Debiased Semi-Supervised Learning for Crisis Tweet Classification via Memory Bank",
author = "Zou, Henry and
Zhou, Yue and
Zhang, Weizhi and
Caragea, Cornelia",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.406",
doi = "10.18653/v1/2023.findings-emnlp.406",
pages = "6104--6115",
abstract = "During crisis events, people often use social media platforms such as Twitter to disseminate information about the situation, warnings, advice, and support. Emergency relief organizations leverage such information to acquire timely crisis circumstances and expedite rescue operations. While existing works utilize such information to build models for crisis event analysis, fully-supervised approaches require annotating vast amounts of data and are impractical due to limited response time. On the other hand, semi-supervised models can be biased, performing moderately well for certain classes while performing extremely poorly for others, resulting in substantially negative effects on disaster monitoring and rescue. In this paper, we first study two recent debiasing methods on semi-supervised crisis tweet classification. Then we propose a simple but effective debiasing method, DeCrisisMB, that utilizes a Memory Bank to store and perform equal sampling for generated pseudo-labels from each class at each training iteration. Extensive experiments are conducted to compare different debiasing methods{'} performance and generalization ability in both in-distribution and out-of-distribution settings. The results demonstrate the superior performance of our proposed method. Our code is available at https://github.com/HenryPengZou/DeCrisisMB.",
}
|
During crisis events, people often use social media platforms such as Twitter to disseminate information about the situation, warnings, advice, and support. Emergency relief organizations leverage such information to gain timely awareness of crisis circumstances and expedite rescue operations. While existing works utilize such information to build models for crisis event analysis, fully-supervised approaches require annotating vast amounts of data and are impractical due to limited response time. On the other hand, semi-supervised models can be biased, performing moderately well for certain classes while performing extremely poorly for others, resulting in substantially negative effects on disaster monitoring and rescue. In this paper, we first study two recent debiasing methods on semi-supervised crisis tweet classification. Then we propose a simple but effective debiasing method, DeCrisisMB, that utilizes a Memory Bank to store and perform equal sampling for generated pseudo-labels from each class at each training iteration. Extensive experiments are conducted to compare different debiasing methods{'} performance and generalization ability in both in-distribution and out-of-distribution settings. The results demonstrate the superior performance of our proposed method. Our code is available at https://github.com/HenryPengZou/DeCrisisMB.
|
[
"Zou, Henry",
"Zhou, Yue",
"Zhang, Weizhi",
"Caragea, Cornelia"
] |
DeCrisisMB: Debiased Semi-Supervised Learning for Crisis Tweet Classification via Memory Bank
|
findings-emnlp.406
|
2310.14577
|
[
"https://github.com/HenryPengZou/DeCrisisMB"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
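A minimal sketch of the memory-bank idea in the DeCrisisMB abstract above: store pseudo-labelled examples per class and sample them equally at each iteration. The class names, bank size, and batch size below are illustrative assumptions, not values from the paper.

```python
# Minimal sketch of DeCrisisMB-style equal sampling (findings-emnlp.406).
import random
from collections import defaultdict, deque

BANK_SIZE, PER_CLASS = 100, 2  # assumed sizes for illustration
memory = defaultdict(lambda: deque(maxlen=BANK_SIZE))

def store(example: str, pseudo_label: str) -> None:
    memory[pseudo_label].append(example)

def sample_balanced() -> list:
    # Equal per-class sampling counteracts the bias toward easy/head classes.
    batch = []
    for label, bank in memory.items():
        k = min(PER_CLASS, len(bank))
        batch += [(x, label) for x in random.sample(list(bank), k)]
    random.shuffle(batch)
    return batch

store("water levels rising downtown", "flood")
store("bridge closed near the river", "flood")
store("houses shaking violently", "earthquake")
print(sample_balanced())
```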
https://aclanthology.org/2023.findings-emnlp.407.bib
|
https://aclanthology.org/2023.findings-emnlp.407/
|
@inproceedings{roy-etal-2023-probing,
title = "Probing {LLM}s for hate speech detection: strengths and vulnerabilities",
author = "Roy, Sarthak and
Harshvardhan, Ashish and
Mukherjee, Animesh and
Saha, Punyajoy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.407",
doi = "10.18653/v1/2023.findings-emnlp.407",
pages = "6116--6128",
abstract = "Recently efforts have been made by social media platforms as well as researchers to detect hateful or toxic language using large language models. However, none of these works aim to use explanation, additional context and victim community information in the detection process. We utilise different prompt variation, input information and evaluate large language models in zero shot setting (without adding any in-context examples). We select two large language models (GPT-3.5 and text-davinci) and three datasets - HateXplain, implicit hate and ToxicSpans. We find that on average including the target information in the pipeline improves the model performance substantially ($\sim20-30\%$) over the baseline across the datasets. There is also a considerable effect of adding the rationales/explanations into the pipeline ($\sim10-20\%$) over the baseline across the datasets. In addition, we further provide a typology of the error cases where these large language models fail to (i) classify and (ii) explain the reason for the decisions they take. Such vulnerable points automatically constitute {`}jailbreak{'} prompts for these models and industry scale safeguard techniques need to be developed to make the models robust against such prompts.",
}
|
Recently, efforts have been made by social media platforms as well as researchers to detect hateful or toxic language using large language models. However, none of these works aim to use explanation, additional context and victim community information in the detection process. We utilise different prompt variations and input information, and evaluate large language models in a zero-shot setting (without adding any in-context examples). We select two large language models (GPT-3.5 and text-davinci) and three datasets: HateXplain, implicit hate and ToxicSpans. We find that, on average, including the target information in the pipeline improves the model performance substantially ($\sim20-30\%$) over the baseline across the datasets. There is also a considerable effect of adding the rationales/explanations into the pipeline ($\sim10-20\%$) over the baseline across the datasets. In addition, we further provide a typology of the error cases where these large language models fail to (i) classify and (ii) explain the reason for the decisions they take. Such vulnerable points automatically constitute {`}jailbreak{'} prompts for these models, and industry-scale safeguard techniques need to be developed to make the models robust against such prompts.
|
[
"Roy, Sarthak",
"Harshvardhan, Ashish",
"Mukherjee, Animesh",
"Saha, Punyajoy"
] |
Probing LLMs for hate speech detection: strengths and vulnerabilities
|
findings-emnlp.407
|
2310.12860
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
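A small sketch of the kind of zero-shot prompt variants the abstract above compares: a bare classification prompt versus prompts enriched with target-community information and rationales. The exact wording below is an assumption, not the paper's template.

```python
# Sketch of zero-shot prompt variants (findings-emnlp.407); wording is assumed.
from typing import Optional

def build_prompt(post: str, target: Optional[str] = None,
                 rationale: Optional[str] = None) -> str:
    prompt = f'Classify the following post as "hate" or "non-hate".\nPost: {post}\n'
    if target:  # victim/target community information
        prompt += f"Target community: {target}\n"
    if rationale:  # human-readable explanation of why the post may be hateful
        prompt += f"Rationale: {rationale}\n"
    return prompt + "Answer:"

print(build_prompt("an example post", target="immigrants"))
```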
https://aclanthology.org/2023.findings-emnlp.408.bib
|
https://aclanthology.org/2023.findings-emnlp.408/
|
@inproceedings{huang-etal-2023-simple,
title = "From Simple to Complex: A Progressive Framework for Document-level Informative Argument Extraction",
author = "Huang, Quzhe and
Zhang, Yanxi and
Zhao, Dongyan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.408",
doi = "10.18653/v1/2023.findings-emnlp.408",
pages = "6129--6140",
abstract = "Document-level Event Argument Extraction (EAE) requires the model to extract arguments of multiple events from a single document. Considering the underlying dependencies between these events, recent efforts leverage the idea of {``}memory{''}, where the results of already predicted events are cached and can be retrieved to help the prediction of upcoming events. These methods extract events according to their appearance order in the document, however, the event that appears in the first sentence does not mean that it is the easiest to extract. Existing methods might introduce noise to the extraction of upcoming events if they rely on an incorrect prediction of previous events. In order to provide more reliable memory, we propose a simple-to-complex progressive framework for document-level EAE. Specifically, we first calculate the difficulty of each event and then, we conduct the extraction following a simple-to-complex order. In this way, the memory will store the most certain results, and the model could use these reliable sources to help the prediction of more difficult events. Experiments on WikiEvents show that our model outperforms SOTA by 1.4{\%} in F1, indicating the proposed simple-to-complex framework is useful in the EAE task.",
}
|
Document-level Event Argument Extraction (EAE) requires the model to extract arguments of multiple events from a single document. Considering the underlying dependencies between these events, recent efforts leverage the idea of {``}memory{''}, where the results of already predicted events are cached and can be retrieved to help the prediction of upcoming events. These methods extract events according to their order of appearance in the document; however, the event that appears in the first sentence is not necessarily the easiest to extract. Existing methods might introduce noise to the extraction of upcoming events if they rely on an incorrect prediction of previous events. In order to provide more reliable memory, we propose a simple-to-complex progressive framework for document-level EAE. Specifically, we first calculate the difficulty of each event, and then we conduct the extraction following a simple-to-complex order. In this way, the memory will store the most certain results, and the model could use these reliable sources to help the prediction of more difficult events. Experiments on WikiEvents show that our model outperforms SOTA by 1.4{\%} in F1, indicating the proposed simple-to-complex framework is useful in the EAE task.
|
[
"Huang, Quzhe",
"Zhang, Yanxi",
"Zhao, Dongyan"
] |
From Simple to Complex: A Progressive Framework for Document-level Informative Argument Extraction
|
findings-emnlp.408
|
2310.16358
|
[
"https://github.com/zhangyx0417/simple_to_complex"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
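The scheduling idea in the abstract above reduces to a simple pattern: score each event's difficulty, then extract in ascending difficulty so the memory first holds the most reliable predictions. The sketch below assumes first-pass model confidence as the difficulty proxy, which is this illustration's choice, not necessarily the paper's exact score.

```python
# Sketch of simple-to-complex extraction scheduling (findings-emnlp.408).
def extract_in_difficulty_order(events, difficulty, extract_fn):
    memory = {}  # event id -> predicted arguments, visible to later events
    for ev in sorted(events, key=difficulty):  # easiest first
        memory[ev["id"]] = extract_fn(ev, memory)
    return memory

events = [{"id": "e1", "confidence": 0.9}, {"id": "e2", "confidence": 0.4}]
result = extract_in_difficulty_order(
    events,
    difficulty=lambda ev: 1.0 - ev["confidence"],  # low confidence = hard
    extract_fn=lambda ev, mem: {"args": [], "memory_seen": sorted(mem)},
)
print(result)  # e1 is extracted before e2 and is available in e2's memory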
https://aclanthology.org/2023.findings-emnlp.409.bib
|
https://aclanthology.org/2023.findings-emnlp.409/
|
@inproceedings{zhang-etal-2023-multicmet,
title = "{M}ulti{CMET}: A Novel {C}hinese Benchmark for Understanding Multimodal Metaphor",
author = "Zhang, Dongyu and
Yu, Jingwei and
Jin, Senyuan and
Yang, Liang and
Lin, Hongfei",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.409",
doi = "10.18653/v1/2023.findings-emnlp.409",
pages = "6141--6154",
abstract = "Metaphor is a pervasive aspect of human communication, and its presence in multimodal forms has become more prominent with the progress of mass media. However, there is limited research on multimodal metaphor resources beyond the English language. Furthermore, the existing work in natural language processing does not address the exploration of categorizing the source and target domains in metaphors. This omission is significant considering the extensive research conducted in the fields of cognitive linguistics, which emphasizes that a profound understanding of metaphor relies on recognizing the differences and similarities between domain categories. We, therefore, introduce MultiCMET, a multimodal Chinese metaphor dataset, consisting of 13,820 text-image pairs of advertisements with manual annotations of the occurrence of metaphors, domain categories, and sentiments metaphors convey. We also constructed a domain lexicon that encompasses categorizations of metaphorical source domains and target domains and propose a Cascading Domain Knowledge Integration (CDKI) benchmark to detect metaphors by introducing domain-specific lexical features. Experimental results demonstrate the effectiveness of CDKI. The dataset and code are publicly available.",
}
|
Metaphor is a pervasive aspect of human communication, and its presence in multimodal forms has become more prominent with the progress of mass media. However, there is limited research on multimodal metaphor resources beyond the English language. Furthermore, existing work in natural language processing does not address the categorization of source and target domains in metaphors. This omission is significant considering the extensive research conducted in the field of cognitive linguistics, which emphasizes that a profound understanding of metaphor relies on recognizing the differences and similarities between domain categories. We, therefore, introduce MultiCMET, a multimodal Chinese metaphor dataset, consisting of 13,820 text-image pairs of advertisements with manual annotations of the occurrence of metaphors, domain categories, and the sentiments metaphors convey. We also construct a domain lexicon that encompasses categorizations of metaphorical source domains and target domains and propose a Cascading Domain Knowledge Integration (CDKI) benchmark to detect metaphors by introducing domain-specific lexical features. Experimental results demonstrate the effectiveness of CDKI. The dataset and code are publicly available.
|
[
"Zhang, Dongyu",
"Yu, Jingwei",
"Jin, Senyuan",
"Yang, Liang",
"Lin, Hongfei"
] |
MultiCMET: A Novel Chinese Benchmark for Understanding Multimodal Metaphor
|
findings-emnlp.409
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
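A sketch of the domain-lexicon features behind the CDKI baseline described above: count how many tokens of an ad's text fall into each source/target domain category. The lexicon entries below are invented placeholders; the paper builds a real Chinese domain lexicon.

```python
# Sketch of domain-lexicon features for metaphor detection (findings-emnlp.409).
DOMAIN_LEXICON = {"lion": "animal", "fire": "nature", "journey": "activity"}
DOMAINS = sorted(set(DOMAIN_LEXICON.values()))

def domain_features(text: str) -> list:
    counts = {d: 0 for d in DOMAINS}
    for tok in text.lower().split():
        if tok in DOMAIN_LEXICON:
            counts[DOMAIN_LEXICON[tok]] += 1
    return [counts[d] for d in DOMAINS]  # appended to the model's input features

print(DOMAINS, domain_features("Life is a journey through fire"))
```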
https://aclanthology.org/2023.findings-emnlp.410.bib
|
https://aclanthology.org/2023.findings-emnlp.410/
|
@inproceedings{kargaran-etal-2023-glotlid,
title = "{G}lot{LID}: Language Identification for Low-Resource Languages",
author = "Kargaran, Amir Hossein and
Imani, Ayyoob and
Yvon, Fran{\c{c}}ois and
Schuetze, Hinrich",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.410",
doi = "10.18653/v1/2023.findings-emnlp.410",
pages = "6155--6218",
abstract = "Several recent papers have published good solutions for language identification (LID) for about 300 high-resource and medium-resource languages. However, there is no LID available that (i) covers a wide range of low-resource languages, (ii) is rigorously evaluated and reliable and (iii) efficient and easy to use. Here, we publish GlotLID-M, an LID model that satisfies the desiderata of wide coverage, reliability and efficiency. It identifies 1665 languages, a large increase in coverage compared to prior work. In our experiments, GlotLID-M outperforms four baselines (CLD3, FT176, OpenLID and NLLB) when balancing F1 and false positive rate (FPR). We analyze the unique challenges that low-resource LID poses: incorrect corpus metadata, leakage from high-resource languages, difficulty separating closely related languages, handling of macrolanguage vs varieties and in general noisy data. We hope that integrating GlotLID-M into dataset creation pipelines will improve quality and enhance accessibility of NLP technology for low-resource languages and cultures. GlotLID-M model, code, and list of data sources are available: https://github.com/cisnlp/GlotLID.",
}
|
Several recent papers have published good solutions for language identification (LID) for about 300 high-resource and medium-resource languages. However, there is no LID available that (i) covers a wide range of low-resource languages, (ii) is rigorously evaluated and reliable, and (iii) is efficient and easy to use. Here, we publish GlotLID-M, an LID model that satisfies the desiderata of wide coverage, reliability and efficiency. It identifies 1665 languages, a large increase in coverage compared to prior work. In our experiments, GlotLID-M outperforms four baselines (CLD3, FT176, OpenLID and NLLB) when balancing F1 and false positive rate (FPR). We analyze the unique challenges that low-resource LID poses: incorrect corpus metadata, leakage from high-resource languages, difficulty separating closely related languages, handling of macrolanguages vs. varieties, and, in general, noisy data. We hope that integrating GlotLID-M into dataset creation pipelines will improve quality and enhance accessibility of NLP technology for low-resource languages and cultures. GlotLID-M model, code, and list of data sources are available: https://github.com/cisnlp/GlotLID.
|
[
"Kargaran, Amir Hossein",
"Imani, Ayyoob",
"Yvon, Fran{\\c{c}}ois",
"Schuetze, Hinrich"
] |
GlotLID: Language Identification for Low-Resource Languages
|
findings-emnlp.410
|
2310.16248
|
[
"https://github.com/cisnlp/glotstorybook"
] |
https://huggingface.co/papers/2310.16248
| 1 | 1 | 2 | 4 |
[
"cis-lmu/glotlid"
] |
[
"cis-lmu/GlotStoryBook",
"cis-lmu/GlotSparse",
"cis-lmu/udhr-lid"
] |
[
"cis-lmu/glotlid-space",
"kargaranamir/language-identification",
"nafisehNik/girt-space",
"cis-lmu/MaskLID",
"kargaranamir/LangID-LIME",
"B22530/cis-lmu-glotlid",
"NeerAbhy/Text_analyzer",
"anzorq/glotlid",
"NorHsangPha/fasttext-language-identification"
] | 1 |
Poster
|
|
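Since this entry lists the released model (cis-lmu/glotlid above), here is the probable usage pattern via fastText and the Hugging Face Hub. The artifact name "model.bin" is this sketch's assumption; check the repo for the exact file.

```python
# Probable usage of the released GlotLID-M model (findings-emnlp.410).
import fasttext
from huggingface_hub import hf_hub_download

model_path = hf_hub_download(repo_id="cis-lmu/glotlid", filename="model.bin")
model = fasttext.load_model(model_path)

# fastText returns labels like "__label__eng_Latn" with probabilities.
labels, probs = model.predict("Hello, how are you today?", k=3)
for label, prob in zip(labels, probs):
    print(label.replace("__label__", ""), round(float(prob), 3))
```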
https://aclanthology.org/2023.findings-emnlp.411.bib
|
https://aclanthology.org/2023.findings-emnlp.411/
|
@inproceedings{li-qiu-2023-finding,
title = "Finding Support Examples for In-Context Learning",
author = "Li, Xiaonan and
Qiu, Xipeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.411",
doi = "10.18653/v1/2023.findings-emnlp.411",
pages = "6219--6235",
abstract = "In-context learning is a new learning paradigm where a language model observes a few examples and directly outputs the test input{'}s prediction. Previous works have shown that it is sensitive to the provided examples and randomly sampled examples probably cause inferior performance. In this paper, we propose finding {``}support examples{''} for in-context learning: Given a training dataset, it aims to select one permutation of a few examples, which can well characterize the task for in-context learning and thus lead to superior performance. Although for traditional gradient-based training, there are extensive methods to find a coreset from the entire dataset, they struggle to find important in-context examples, because in-context learning occurs in the language model{'}s forward process without gradients or parameter updates and thus has a significant gap with traditional training. Additionally, the strong dependence among in-context examples makes it an NP-hard combinatorial optimization problem and enumerating all permutations is infeasible. Hence we propose **LENS**, a fi**L**ter-th**EN**-**S**earch method to tackle this challenge in two stages: irst we filter the dataset to obtain individually informative in-context examples. Specifically, we propose a novel metric, InfoScore, to evaluate the example{'}s in-context informativeness based on the language model{'}s feedback, and further propose a progressive filtering process to filter out uninformative examples. Then we propose diversity-guided example search which iteratively refines and evaluates the selected example permutations, to find examples that fully depict the task. The experimental results show that LENS significantly outperforms a wide range of baselines and further analyses show that each component contribute critically to the improvements and shed light on the principles of supporting examples and in-context learning.",
}
|
In-context learning is a new learning paradigm where a language model observes a few examples and directly outputs the test input{'}s prediction. Previous works have shown that it is sensitive to the provided examples, and that randomly sampled examples can cause inferior performance. In this paper, we propose finding {``}support examples{''} for in-context learning: Given a training dataset, it aims to select one permutation of a few examples, which can well characterize the task for in-context learning and thus lead to superior performance. Although for traditional gradient-based training, there are extensive methods to find a coreset from the entire dataset, they struggle to find important in-context examples, because in-context learning occurs in the language model{'}s forward process without gradients or parameter updates and thus has a significant gap with traditional training. Additionally, the strong dependence among in-context examples makes it an NP-hard combinatorial optimization problem and enumerating all permutations is infeasible. Hence we propose **LENS**, a fi**L**ter-th**EN**-**S**earch method to tackle this challenge in two stages: first, we filter the dataset to obtain individually informative in-context examples. Specifically, we propose a novel metric, InfoScore, to evaluate the example{'}s in-context informativeness based on the language model{'}s feedback, and further propose a progressive filtering process to filter out uninformative examples. Then we propose diversity-guided example search, which iteratively refines and evaluates the selected example permutations, to find examples that fully depict the task. The experimental results show that LENS significantly outperforms a wide range of baselines, and further analyses show that each component contributes critically to the improvements and shed light on the principles of supporting examples and in-context learning.
|
[
"Li, Xiaonan",
"Qiu, Xipeng"
] |
Finding Support Examples for In-Context Learning
|
findings-emnlp.411
|
2302.13539
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
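A structural sketch of the two LENS stages described above: (1) keep individually informative examples via an InfoScore-like metric, then (2) search example permutations with a diversity objective. Both scoring functions below are toy surface-statistics proxies; the paper scores examples with language-model feedback.

```python
# Structural sketch of filter-then-search example selection (findings-emnlp.411).
import itertools

def info_score(example: str) -> float:
    toks = example.split()
    return len(set(toks)) / max(len(toks), 1)  # toy informativeness proxy

def diversity(perm) -> int:
    vocab = set()
    for ex in perm:
        vocab |= set(ex.split())
    return len(vocab)

pool = ["great movie -> positive", "awful plot -> negative",
        "great great movie -> positive", "boring film -> negative"]

filtered = sorted(pool, key=info_score, reverse=True)[:3]       # stage 1: filter
best = max(itertools.permutations(filtered, 2), key=diversity)  # stage 2: search
print(best)
```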
https://aclanthology.org/2023.findings-emnlp.412.bib
|
https://aclanthology.org/2023.findings-emnlp.412/
|
@inproceedings{park-etal-2023-uncovering,
title = "Uncovering the Root of Hate Speech: A Dataset for Identifying Hate Instigating Speech",
author = "Park, Hyoungjun and
Shim, Ho and
Lee, Kyuhan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.412",
doi = "10.18653/v1/2023.findings-emnlp.412",
pages = "6236--6245",
abstract = "While many prior studies have applied computational approaches, such as machine learning, to detect and moderate hate speech, only scant attention has been paid to the task of identifying the underlying cause of hate speech. In this study, we introduce the concept of hate instigating speech, which refers to a specific type of textual posts on online platforms that stimulate or provoke others to engage in hate speech. The identification of hate instigating speech carries substantial practical implications for effective hate speech moderation. Rather than targeting individual instances of hate speech, by focusing on their roots, i.e., hate instigating speech, it becomes possible to significantly reduce the volume of content that requires review for moderation. Additionally, targeting hate instigating speech enables early prevention of the spread and propagation of hate speech, further enhancing the effectiveness of moderation efforts. However, several challenges hinder researchers from addressing the identification of hate instigating speech. First, there is a lack of comprehensive datasets specifically annotated for hate instigation, making it difficult to train and evaluate computational models effectively. Second, the subtle and nuanced nature of hate instigating speech (e.g., seemingly non-offensive texts serve as catalysts for triggering hate speech) makes it difficult to apply off-the-shelf machine learning models to the problem. To address these challenges, in this study, we have developed and released a multilingual dataset specifically designed for the task of identifying hate instigating speech. Specifically, it encompasses both English and Korean, allowing for a comprehensive examination of hate instigating speech across different linguistic contexts. We have applied existing machine learning models to our dataset and the results demonstrate that the extant models alone are insufficient for effectively detecting hate instigating speech. This finding highlights the need for further attention from the academic community to address this specific challenge. We expect our study and dataset to inspire researchers to explore innovative methods that can enhance the accuracy of hate instigating speech detection, ultimately contributing to more effective moderation and prevention of hate speech propagation online.",
}
|
While many prior studies have applied computational approaches, such as machine learning, to detect and moderate hate speech, only scant attention has been paid to the task of identifying the underlying cause of hate speech. In this study, we introduce the concept of hate instigating speech, which refers to a specific type of textual post on online platforms that stimulates or provokes others to engage in hate speech. The identification of hate instigating speech carries substantial practical implications for effective hate speech moderation. Rather than targeting individual instances of hate speech, by focusing on their roots, i.e., hate instigating speech, it becomes possible to significantly reduce the volume of content that requires review for moderation. Additionally, targeting hate instigating speech enables early prevention of the spread and propagation of hate speech, further enhancing the effectiveness of moderation efforts. However, several challenges hinder researchers from addressing the identification of hate instigating speech. First, there is a lack of comprehensive datasets specifically annotated for hate instigation, making it difficult to train and evaluate computational models effectively. Second, the subtle and nuanced nature of hate instigating speech (e.g., seemingly non-offensive texts serving as catalysts for triggering hate speech) makes it difficult to apply off-the-shelf machine learning models to the problem. To address these challenges, in this study, we have developed and released a multilingual dataset specifically designed for the task of identifying hate instigating speech. Specifically, it encompasses both English and Korean, allowing for a comprehensive examination of hate instigating speech across different linguistic contexts. We have applied existing machine learning models to our dataset and the results demonstrate that the extant models alone are insufficient for effectively detecting hate instigating speech. This finding highlights the need for further attention from the academic community to address this specific challenge. We expect our study and dataset to inspire researchers to explore innovative methods that can enhance the accuracy of hate instigating speech detection, ultimately contributing to more effective moderation and prevention of hate speech propagation online.
|
[
"Park, Hyoungjun",
"Shim, Ho",
"Lee, Kyuhan"
] |
Uncovering the Root of Hate Speech: A Dataset for Identifying Hate Instigating Speech
|
findings-emnlp.412
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.413.bib
|
https://aclanthology.org/2023.findings-emnlp.413/
|
@inproceedings{liu-etal-2023-responsible,
title = "Responsible {AI} Considerations in Text Summarization Research: A Review of Current Practices",
author = "Liu, Yu Lu and
Cao, Meng and
Blodgett, Su Lin and
Cheung, Jackie Chi Kit and
Olteanu, Alexandra and
Trischler, Adam",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.413",
doi = "10.18653/v1/2023.findings-emnlp.413",
pages = "6246--6261",
abstract = "AI and NLP publication venues have increasingly encouraged researchers to reflect on possible ethical considerations, adverse impacts, and other responsible AI issues their work might engender. However, for specific NLP tasks our understanding of how prevalent such issues are, or when and why these issues are likely to arise, remains limited. Focusing on text summarization{---}a common NLP task largely overlooked by the responsible AI community{---}we examine research and reporting practices in the current literature. We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020{--}2022. We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals. We also discuss current evaluation practices and consider how authors discuss the limitations of both prior work and their own work. Overall, we find that relatively few papers engage with possible stakeholders or contexts of use, which limits their consideration of potential downstream adverse impacts or other responsible AI issues. Based on our findings, we make recommendations on concrete practices and research directions.",
}
|
AI and NLP publication venues have increasingly encouraged researchers to reflect on possible ethical considerations, adverse impacts, and other responsible AI issues their work might engender. However, for specific NLP tasks our understanding of how prevalent such issues are, or when and why these issues are likely to arise, remains limited. Focusing on text summarization{---}a common NLP task largely overlooked by the responsible AI community{---}we examine research and reporting practices in the current literature. We conduct a multi-round qualitative analysis of 333 summarization papers from the ACL Anthology published between 2020{--}2022. We focus on how, which, and when responsible AI issues are covered, which relevant stakeholders are considered, and mismatches between stated and realized research goals. We also discuss current evaluation practices and consider how authors discuss the limitations of both prior work and their own work. Overall, we find that relatively few papers engage with possible stakeholders or contexts of use, which limits their consideration of potential downstream adverse impacts or other responsible AI issues. Based on our findings, we make recommendations on concrete practices and research directions.
|
[
"Liu, Yu Lu",
"Cao, Meng",
"Blodgett, Su Lin",
"Cheung, Jackie Chi Kit",
"Olteanu, Alex",
"ra",
"Trischler, Adam"
] |
Responsible AI Considerations in Text Summarization Research: A Review of Current Practices
|
findings-emnlp.413
|
2311.11103
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
https://aclanthology.org/2023.findings-emnlp.414.bib
|
https://aclanthology.org/2023.findings-emnlp.414/
|
@inproceedings{yin-etal-2023-improving,
title = "Improving Speech Translation by Fusing Speech and Text",
author = "Yin, Wenbiao and
Liu, Zhicheng and
Zhao, Chengqi and
Wang, Tao and
Tong, Jian and
Ye, Rong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.414",
doi = "10.18653/v1/2023.findings-emnlp.414",
pages = "6262--6273",
abstract = "In speech translation, leveraging multimodal data to improve model performance and address limitations of individual modalities has shown significant effectiveness. In this paper, we harness the complementary strengths of speech and text to improve speech translation. However, speech and text are disparate modalities, we observe three aspects of modality gap that impede their integration in a speech translation model. To tackle these gaps, we propose **Fuse**-**S**peech-**T**ext (**FuseST**), a cross-modal model which supports three distinct input modalities for translation: speech, text and fused speech-text. We leverage multiple techniques for cross-modal alignment and conduct a comprehensive analysis to assess its impact on speech translation, machine translation and fused speech-text translation. We evaluate FuseST on MuST-C, GigaST and newstest benchmark. Experiments show that the proposed FuseST achieves an average 34.0 BLEU on MuST-C En$\rightarrow$De/Es/Fr (vs SOTA +1.1 BLEU). Further experiments demonstrate that FuseST does not degrade on MT task, as observed in previous works. Instead, it yields an average improvement of 3.2 BLEU over the pre-trained MT model. Code is available at https://github.com/WenbiaoYin/FuseST.",
}
|
In speech translation, leveraging multimodal data to improve model performance and address limitations of individual modalities has shown significant effectiveness. In this paper, we harness the complementary strengths of speech and text to improve speech translation. However, speech and text are disparate modalities, and we observe three aspects of the modality gap that impede their integration in a speech translation model. To tackle these gaps, we propose **Fuse**-**S**peech-**T**ext (**FuseST**), a cross-modal model which supports three distinct input modalities for translation: speech, text and fused speech-text. We leverage multiple techniques for cross-modal alignment and conduct a comprehensive analysis to assess its impact on speech translation, machine translation and fused speech-text translation. We evaluate FuseST on the MuST-C, GigaST and newstest benchmarks. Experiments show that the proposed FuseST achieves an average 34.0 BLEU on MuST-C En$\rightarrow$De/Es/Fr (vs SOTA +1.1 BLEU). Further experiments demonstrate that FuseST does not degrade on the MT task, as observed in previous works. Instead, it yields an average improvement of 3.2 BLEU over the pre-trained MT model. Code is available at https://github.com/WenbiaoYin/FuseST.
|
[
"Yin, Wenbiao",
"Liu, Zhicheng",
"Zhao, Chengqi",
"Wang, Tao",
"Tong, Jian",
"Ye, Rong"
] |
Improving Speech Translation by Fusing Speech and Text
|
findings-emnlp.414
|
2305.14042
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
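A shape-level sketch of a fused speech-text input as described above: project both modalities into one space, tag each with a modality embedding, and concatenate into a single sequence for a shared encoder. The dimensions and the additive tagging scheme are illustrative assumptions, not the paper's architecture.

```python
# Shape-level sketch of fused speech-text input (findings-emnlp.414).
import numpy as np

d_model = 8
speech = np.random.randn(50, d_model)  # 50 projected acoustic frames
text = np.random.randn(12, d_model)    # 12 token embeddings

def fuse(speech_seq: np.ndarray, text_seq: np.ndarray) -> np.ndarray:
    speech_tag = np.full(d_model, 0.1)  # stands in for a learned "speech" tag
    text_tag = np.full(d_model, -0.1)   # stands in for a learned "text" tag
    return np.concatenate([speech_seq + speech_tag, text_seq + text_tag], axis=0)

print(fuse(speech, text).shape)  # (62, 8): one fused speech-text sequence
```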
https://aclanthology.org/2023.findings-emnlp.415.bib
|
https://aclanthology.org/2023.findings-emnlp.415/
|
@inproceedings{lu-etal-2023-narrative,
title = "Narrative Order Aware Story Generation via Bidirectional Pretraining Model with Optimal Transport Reward",
author = "Lu, Zhicong and
Jin, Li and
Xu, Guangluan and
Hu, Linmei and
Liu, Nayu and
Li, Xiaoyu and
Sun, Xian and
Zhang, Zequn and
Wei, Kaiwen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.415",
doi = "10.18653/v1/2023.findings-emnlp.415",
pages = "6274--6287",
abstract = "To create a captivating story, a writer often plans a sequence of logically coherent events and ingeniously manipulates the narrative order to generate flashback in place. However, existing storytelling systems suffer from both insufficient understanding of event correlations and inadequate awareness of event temporal order (e.g., go to hospital {\textless}after{\textgreater} get ill), making it challenging to generate high-quality events that balance the logic and narrative order of story. In this paper, we propose a narrative order aware framework BPOT (Bidirectional Pretraining Model with Optimal Transport Reward) for story generation, which presents a bidirectional pretrained model to encode event correlations and pairwise event order. We also design a reinforcement learning algorithm with novel optimal transport reward to further improve the quality of generated events in the fine-tuning stage. Specifically, a narrative order aware event sequence model is pretrained with the joint learning objectives of event blank infilling and pairwise order prediction. Then, reinforcement learning with novel optimal transport reward is designed to further improve the generated event quality in the fine-tuning stage. The novel optimal transport reward captures the mappings between the generated events and the sentences in the story, effectively measuring the quality of generated events. Both automatic and manual evaluation results demonstrate the superiority of our framework in generating logically coherent stories with flashbacks.",
}
|
To create a captivating story, a writer often plans a sequence of logically coherent events and ingeniously manipulates the narrative order to place flashbacks appropriately. However, existing storytelling systems suffer from both insufficient understanding of event correlations and inadequate awareness of event temporal order (e.g., go to hospital {\textless}after{\textgreater} get ill), making it challenging to generate high-quality events that balance the logic and narrative order of a story. In this paper, we propose a narrative order aware framework, BPOT (Bidirectional Pretraining Model with Optimal Transport Reward), for story generation, which presents a bidirectional pretrained model to encode event correlations and pairwise event order. We also design a reinforcement learning algorithm with a novel optimal transport reward to further improve the quality of generated events in the fine-tuning stage. Specifically, a narrative order aware event sequence model is pretrained with the joint learning objectives of event blank infilling and pairwise order prediction. Then, in the fine-tuning stage, reinforcement learning with the novel optimal transport reward further improves the quality of the generated events. This reward captures the mappings between the generated events and the sentences in the story, effectively measuring the quality of generated events. Both automatic and manual evaluation results demonstrate the superiority of our framework in generating logically coherent stories with flashbacks.
|
[
"Lu, Zhicong",
"Jin, Li",
"Xu, Guangluan",
"Hu, Linmei",
"Liu, Nayu",
"Li, Xiaoyu",
"Sun, Xian",
"Zhang, Zequn",
"Wei, Kaiwen"
] |
Narrative Order Aware Story Generation via Bidirectional Pretraining Model with Optimal Transport Reward
|
findings-emnlp.415
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
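A sketch of an optimal-transport reward between generated events and story sentences, in the spirit of the abstract above, using entropic OT (Sinkhorn iterations). The cost here is 1 minus cosine similarity of toy vectors; all modelling choices are illustrative, not the paper's exact formulation.

```python
# Sketch of a Sinkhorn-based optimal-transport reward (findings-emnlp.415).
import numpy as np

def sinkhorn(cost: np.ndarray, eps: float = 0.1, iters: int = 200) -> np.ndarray:
    m, n = cost.shape
    K = np.exp(-cost / eps)
    a, b = np.ones(m) / m, np.ones(n) / n  # uniform marginals
    u, v = np.ones(m), np.ones(n)
    for _ in range(iters):
        u = a / (K @ v)
        v = b / (K.T @ u)
    return u[:, None] * K * v[None, :]  # transport plan diag(u) K diag(v)

def ot_reward(event_vecs: np.ndarray, sent_vecs: np.ndarray) -> float:
    norms = (np.linalg.norm(event_vecs, axis=1)[:, None]
             * np.linalg.norm(sent_vecs, axis=1)[None, :])
    cost = 1.0 - event_vecs @ sent_vecs.T / np.clip(norms, 1e-9, None)
    plan = sinkhorn(cost)
    return float(-(plan * cost).sum())  # cheaper transport = higher reward

rng = np.random.default_rng(0)
print(ot_reward(rng.random((3, 5)), rng.random((4, 5))))
```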
https://aclanthology.org/2023.findings-emnlp.416.bib
|
https://aclanthology.org/2023.findings-emnlp.416/
|
@inproceedings{wang-shu-2023-explainable,
title = "Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models",
author = "Wang, Haoran and
Shu, Kai",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.416",
doi = "10.18653/v1/2023.findings-emnlp.416",
pages = "6288--6304",
abstract = "Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges. Our code and data are available.",
}
|
Claim verification plays a crucial role in combating misinformation. While existing works on claim verification have shown promising results, a crucial piece of the puzzle that remains unsolved is to understand how to verify claims without relying on human-annotated data, which is expensive to create at a large scale. Additionally, it is important for models to provide comprehensive explanations that can justify their decisions and assist human fact-checkers. This paper presents First-Order-Logic-Guided Knowledge-Grounded (FOLK) Reasoning that can verify complex claims and generate explanations without the need for annotated evidence using Large Language Models (LLMs). FOLK leverages the in-context learning ability of LLMs to translate the claim into a First-Order-Logic (FOL) clause consisting of predicates, each corresponding to a sub-claim that needs to be verified. Then, FOLK performs FOL-Guided reasoning over a set of knowledge-grounded question-and-answer pairs to make veracity predictions and generate explanations to justify its decision-making process. This process makes our model highly explanatory, providing clear explanations of its reasoning process in human-readable form. Our experiment results indicate that FOLK outperforms strong baselines on three datasets encompassing various claim verification challenges. Our code and data are available.
|
[
"Wang, Haoran",
"Shu, Kai"
] |
Explainable Claim Verification via Knowledge-Grounded Reasoning with Large Language Models
|
findings-emnlp.416
|
2310.05253
|
[
"https://github.com/wang2226/folk"
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
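A structural sketch of the FOL-guided verification described above: a claim is decomposed into predicates (sub-claims), each is checked against grounded QA, and the verdict is their conjunction with per-predicate explanations. The decomposition and the QA oracle below are stand-ins; the paper obtains both from an LLM with in-context prompts.

```python
# Structural sketch of FOL-guided claim verification (findings-emnlp.416).
from dataclasses import dataclass

@dataclass
class Predicate:
    question: str  # grounded question for this sub-claim
    expected: str  # answer that would support the claim

def qa_oracle(question: str) -> str:
    # Stand-in for knowledge-grounded QA (retrieval + LLM in the paper).
    knowledge = {"Who wrote Hamlet?": "Shakespeare",
                 "Is Hamlet a tragedy?": "yes"}
    return knowledge.get(question, "unknown")

def verify(predicates) -> tuple:
    verdict, explanations = True, []
    for p in predicates:
        answer = qa_oracle(p.question)
        holds = answer.lower() == p.expected.lower()
        verdict = verdict and holds  # conjunction over all predicates
        explanations.append(f"{p.question} -> {answer} (supports: {holds})")
    return verdict, explanations

claim = [Predicate("Who wrote Hamlet?", "Shakespeare"),
         Predicate("Is Hamlet a tragedy?", "yes")]
print(verify(claim))
```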
https://aclanthology.org/2023.findings-emnlp.417.bib
|
https://aclanthology.org/2023.findings-emnlp.417/
|
@inproceedings{coman-etal-2023-strong,
title = "Strong and Efficient Baselines for Open Domain Conversational Question Answering",
author = "Coman, Andrei and
Barlacchi, Gianni and
de Gispert, Adri{\`a}",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.417",
doi = "10.18653/v1/2023.findings-emnlp.417",
pages = "6305--6314",
abstract = "Unlike the Open Domain Question Answering (ODQA) setting, the conversational (ODConvQA) domain has received limited attention when it comes to reevaluating baselines for both efficiency and effectiveness. In this paper, we study the State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and Fusion-in-Decoder (FiD) reader pipeline, and show that it significantly underperforms when applied to ODConvQA tasks due to various limitations. We then propose and evaluate strong yet simple and efficient baselines, by introducing a fast reranking component between the retriever and the reader, and by performing targeted finetuning steps. Experiments on two ODConvQA tasks, namely TopiOCQA and OR-QuAC, show that our method improves the SotA results, while reducing reader{'}s latency by 60{\%}. Finally, we provide new and valuable insights into the development of challenging baselines that serve as a reference for future, more intricate approaches, including those that leverage Large Language Models (LLMs).",
}
|
Unlike the Open Domain Question Answering (ODQA) setting, the conversational (ODConvQA) domain has received limited attention when it comes to reevaluating baselines for both efficiency and effectiveness. In this paper, we study the State-of-the-Art (SotA) Dense Passage Retrieval (DPR) retriever and Fusion-in-Decoder (FiD) reader pipeline, and show that it significantly underperforms when applied to ODConvQA tasks due to various limitations. We then propose and evaluate strong yet simple and efficient baselines, by introducing a fast reranking component between the retriever and the reader, and by performing targeted finetuning steps. Experiments on two ODConvQA tasks, namely TopiOCQA and OR-QuAC, show that our method improves the SotA results, while reducing reader{'}s latency by 60{\%}. Finally, we provide new and valuable insights into the development of challenging baselines that serve as a reference for future, more intricate approaches, including those that leverage Large Language Models (LLMs).
|
[
"Coman, Andrei",
"Barlacchi, Gianni",
"de Gispert, Adri{\\`a}"
] |
Strong and Efficient Baselines for Open Domain Conversational Question Answering
|
findings-emnlp.417
|
2310.14708
|
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
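The pipeline shape argued for above reduces to: retrieve many passages, rerank cheaply, and hand only the top few to an expensive reader. The three components in this sketch are lightweight word-overlap stand-ins for DPR, a fast reranker, and FiD respectively; only the wiring is the point.

```python
# Sketch of retrieve -> fast rerank -> read (findings-emnlp.417).
def retrieve(question: str, corpus, k: int = 20):
    overlap = lambda p: len(set(question.lower().split()) & set(p.lower().split()))
    return sorted(corpus, key=overlap, reverse=True)[:k]

def rerank(question: str, passages, keep: int = 2):
    # A fast reranker lets the reader see fewer passages, cutting its latency.
    score = lambda p: len(set(question.lower().split()) & set(p.lower().split()))
    return sorted(passages, key=score, reverse=True)[:keep]

def read(question: str, passages) -> str:
    return f"answer derived from {len(passages)} passages"  # reader stand-in

corpus = ["topic A details", "topic B details", "conversation history notes"]
question = "details about topic A"
print(read(question, rerank(question, retrieve(question, corpus))))
```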
https://aclanthology.org/2023.findings-emnlp.418.bib
|
https://aclanthology.org/2023.findings-emnlp.418/
|
@inproceedings{su-etal-2023-efficient,
title = "Efficient Continue Training of Temporal Language Model with Structural Information",
author = "Su, Zhaochen and
Li, Juntao and
Zhang, Zikang and
Zhou, Zihan and
Zhang, Min",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.418",
doi = "10.18653/v1/2023.findings-emnlp.418",
pages = "6315--6329",
abstract = "Current language models are mainly trained on snap-shots of data gathered at a particular time, which decreases their capability to generalize over time and model language change. To model the \textit{time} variable, existing works have explored temporal language models (e.g., TempoBERT) by directly incorporating the timestamp into the training process. While effective to some extent, these methods are limited by the superficial temporal information brought by timestamps, which fails to learn the inherent changes of linguistic components. In this paper, we empirically confirm that the performance of pre-trained language models (PLMs) is closely affiliated with syntactically changed tokens. Based on this observation, we propose a simple yet effective method named \textit{ \textbf{S}yntax-\textbf{G}uided \textbf{T}emporal \textbf{L}anguage \textbf{M}odel} (SG-TLM), which could learn the inherent language changes by capturing an intrinsic relationship between the \textit{time} prefix and the tokens with salient syntactic change. Experiments on two datasets and three tasks demonstrate that our model outperforms existing temporal language models in both memorization and generalization capabilities. Extensive results further confirm the effectiveness of our approach across different model frameworks, including both encoder-only and decoder-only models (e.g., LLaMA). Our code is available at \url{https://github.com/zhaochen0110/TempoLM}.",
}
|
Current language models are mainly trained on snapshots of data gathered at a particular time, which decreases their capability to generalize over time and model language change. To model the \textit{time} variable, existing works have explored temporal language models (e.g., TempoBERT) by directly incorporating the timestamp into the training process. While effective to some extent, these methods are limited by the superficial temporal information brought by timestamps, which fails to learn the inherent changes of linguistic components. In this paper, we empirically confirm that the performance of pre-trained language models (PLMs) is closely affiliated with syntactically changed tokens. Based on this observation, we propose a simple yet effective method named \textit{ \textbf{S}yntax-\textbf{G}uided \textbf{T}emporal \textbf{L}anguage \textbf{M}odel} (SG-TLM), which can learn the inherent language changes by capturing an intrinsic relationship between the \textit{time} prefix and the tokens with salient syntactic change. Experiments on two datasets and three tasks demonstrate that our model outperforms existing temporal language models in both memorization and generalization capabilities. Extensive results further confirm the effectiveness of our approach across different model frameworks, including both encoder-only and decoder-only models (e.g., LLaMA). Our code is available at \url{https://github.com/zhaochen0110/TempoLM}.
|
[
"Su, Zhaochen",
"Li, Juntao",
"Zhang, Zikang",
"Zhou, Zihan",
"Zhang, Min"
] |
Efficient Continue Training of Temporal Language Model with Structural Information
|
findings-emnlp.418
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
|
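A small sketch of the input construction suggested by the abstract above: a time prefix token plus masking that prefers tokens flagged as syntactically changed across periods. The change set, mask rate, and preference factor are this sketch's assumptions.

```python
# Sketch of time-prefixed, syntax-guided masking (findings-emnlp.418).
import random

SYNTACTIC_CHANGE = {"stream", "viral", "tweet"}  # tokens with salient change

def build_example(tokens, year: int, mask_rate: float = 0.3):
    out = [f"<{year}>"]  # time prefix the model learns to condition on
    for tok in tokens:
        boost = 2.0 if tok in SYNTACTIC_CHANGE else 1.0  # prefer changed tokens
        out.append("[MASK]" if random.random() < mask_rate * boost else tok)
    return out

random.seed(0)
print(build_example("people stream the new show".split(), 2021))
```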
https://aclanthology.org/2023.findings-emnlp.419.bib
|
https://aclanthology.org/2023.findings-emnlp.419/
|
@inproceedings{lin-etal-2023-retrieval,
title = "Retrieval-Augmented Parsing for Complex Graphs by Exploiting Structure and Uncertainty",
author = "Lin, Zi and
Yuan, Quan and
Pasupat, Panupong and
Liu, Jeremiah and
Shang, Jingbo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.419",
doi = "10.18653/v1/2023.findings-emnlp.419",
pages = "6330--6345",
abstract = "Retrieval augmentation enhances generative language models by retrieving informative exemplars relevant for output prediction. However, in realistic graph parsing problems where the output space is large and complex, classic retrieval methods based on input-sentence similarity can fail to identify the most informative exemplars that target graph elements the model is most struggling about, leading to suboptimal retrieval and compromised prediction under limited retrieval budget. In this work, we improve retrieval-augmented parsing for complex graph problems by exploiting two unique sources of information (1) structural similarity and (2) model uncertainty. We propose $\textit{\textbf{S}tructure-aware and \textbf{U}ncertainty-\textbf{G}uided \textbf{A}daptive \textbf{R}etrieval} \textbf{(SUGAR)}$ that first quantify the model uncertainty in graph prediction and identify its most uncertain subgraphs, and then retrieve exemplars based on their structural similarity with the identified uncertain subgraphs. On a suite of real-world parsing benchmarks with non-trivial graph structure (SMCalflow and E-commerce), SUGAR exhibits a strong advantage over its classic counterparts that do not leverage structure or model uncertainty.",
}
|
Retrieval augmentation enhances generative language models by retrieving informative exemplars relevant for output prediction. However, in realistic graph parsing problems where the output space is large and complex, classic retrieval methods based on input-sentence similarity can fail to identify the most informative exemplars that target graph elements the model is most struggling with, leading to suboptimal retrieval and compromised prediction under a limited retrieval budget. In this work, we improve retrieval-augmented parsing for complex graph problems by exploiting two unique sources of information: (1) structural similarity and (2) model uncertainty. We propose $\textit{\textbf{S}tructure-aware and \textbf{U}ncertainty-\textbf{G}uided \textbf{A}daptive \textbf{R}etrieval} \textbf{(SUGAR)}$, which first quantifies the model uncertainty in graph prediction and identifies its most uncertain subgraphs, and then retrieves exemplars based on their structural similarity with the identified uncertain subgraphs. On a suite of real-world parsing benchmarks with non-trivial graph structure (SMCalflow and E-commerce), SUGAR exhibits a strong advantage over its classic counterparts that do not leverage structure or model uncertainty.
|
[
"Lin, Zi",
"Yuan, Quan",
"Pasupat, Panupong",
"Liu, Jeremiah",
"Shang, Jingbo"
] |
Retrieval-Augmented Parsing for Complex Graphs by Exploiting Structure and Uncertainty
|
findings-emnlp.419
| null |
[
""
] | -1 | -1 | -1 | -1 |
[] |
[] |
[] | 0 |
Poster
|
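A sketch of the SUGAR selection logic described above: locate the most uncertain predicted subgraph (lowest node probabilities), then rank exemplars by structural overlap with it. Graphs are plain sets of node labels here, and all data is illustrative; the paper works with full program graphs and model-derived uncertainty.

```python
# Sketch of uncertainty-guided, structure-aware retrieval (findings-emnlp.419).
def most_uncertain_subgraph(node_probs: dict, k: int = 2) -> set:
    return set(sorted(node_probs, key=node_probs.get)[:k])

def structural_similarity(subgraph: set, exemplar: set) -> float:
    union = subgraph | exemplar
    return len(subgraph & exemplar) / len(union) if union else 0.0

node_probs = {"CreateEvent": 0.98, "with_attendee": 0.41, "next_week": 0.55}
uncertain = most_uncertain_subgraph(node_probs)
exemplars = {"ex1": {"with_attendee", "CreateEvent"}, "ex2": {"weather_query"}}
ranked = sorted(exemplars,
                key=lambda e: structural_similarity(uncertain, exemplars[e]),
                reverse=True)
print(uncertain, ranked)  # ex1 is structurally closest to the uncertain part
```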