Datasets:

Column                       Type        Range
bibtex_url                   string      lengths 41 to 53
proceedings                  string      lengths 38 to 50
bibtext                      string      lengths 528 to 3.02k
abstract                     string      lengths 17 to 2.35k
authors                      sequence    lengths 1 to 44
title                        string      lengths 18 to 190
id                           string      lengths 7 to 19
arxiv_id                     string      lengths 0 to 10
GitHub                       sequence    lengths 1 to 1
paper_page                   string      528 distinct values
n_linked_authors             int64       -1 to 15
upvotes                      int64       -1 to 77
num_comments                 int64       -1 to 10
n_authors                    int64       -1 to 52
Models                       sequence    lengths 0 to 100
Datasets                     sequence    lengths 0 to 15
Spaces                       sequence    lengths 0 to 46
paper_page_exists_pre_conf   int64       0 to 1
type                         string      2 distinct values
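As orientation before the records, here is a minimal sketch of loading and filtering a dataset with this schema via the Hugging Face `datasets` library. The repository id is a placeholder, and the convention that -1 in the count columns marks papers without a Hugging Face paper page is an assumption read off the records below.

```python
from datasets import load_dataset

# Hypothetical repository id; substitute the actual one.
ds = load_dataset("org/emnlp-2023-papers", split="train")

# Assumption from the records below: papers without a Hugging Face paper page
# carry -1 in n_linked_authors, upvotes, num_comments and n_authors.
with_pages = ds.filter(lambda r: r["upvotes"] >= 0)
print(f"{len(with_pages)}/{len(ds)} records have paper-page statistics")
```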
https://aclanthology.org/2023.findings-emnlp.220.bib
https://aclanthology.org/2023.findings-emnlp.220/
@inproceedings{li-etal-2023-watermarking, title = "Watermarking {LLM}s with Weight Quantization", author = "Li, Linyang and Jiang, Botian and Wang, Pengyu and Ren, Ke and Yan, Hang and Qiu, Xipeng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.220", doi = "10.18653/v1/2023.findings-emnlp.220", pages = "3368--3378", abstract = "Abuse of large language models reveals high risks as large language models are being deployed at an astonishing speed. It is important to protect the model weights to avoid malicious usage that violates licenses of open-source large language models. This paper proposes a novel watermarking strategy that plants watermarks in the quantization process of large language models without pre-defined triggers during inference. The watermark works when the model is used in the fp32 mode and remains hidden when the model is quantized to int8, in this way, the users can only inference the model without further supervised fine-tuning of the model. We successfully plant the watermark into open-source large language model weights including GPT-Neo and LLaMA. We hope our proposed method can provide a potential direction for protecting model weights in the era of large language model applications.", }
Abuse of large language models reveals high risks as large language models are being deployed at an astonishing speed. It is important to protect the model weights to avoid malicious usage that violates licenses of open-source large language models. This paper proposes a novel watermarking strategy that plants watermarks in the quantization process of large language models without pre-defined triggers during inference. The watermark works when the model is used in fp32 mode and remains hidden when the model is quantized to int8; in this way, users can only run inference with the model and cannot perform further supervised fine-tuning. We successfully plant the watermark into open-source large language model weights including GPT-Neo and LLaMA. We hope our proposed method can provide a potential direction for protecting model weights in the era of large language model applications.
[ "Li, Linyang", "Jiang, Botian", "Wang, Pengyu", "Ren, Ke", "Yan, Hang", "Qiu, Xipeng" ]
Watermarking LLMs with Weight Quantization
findings-emnlp.220
2310.11237
[ "https://github.com/twilight92z/quantize-watermark" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
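The record above describes a watermark that is visible in fp32 weights but hidden after int8 quantization. The numpy sketch below illustrates only the underlying intuition, not the paper's actual algorithm: a sub-half-step offset around each quantization grid point encodes a payload that fp32 inference sees but round-to-int8 erases.

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0.0, 0.1, size=1000).astype(np.float32)
scale = np.abs(w).max() / 127.0                 # symmetric per-tensor int8 scale

q = np.round(w / scale)                         # int8 grid indices of the weights
bits = rng.integers(0, 2, size=w.shape)         # hypothetical watermark payload
w_marked = (q + (2 * bits - 1) * 0.25) * scale  # quarter-step offset encodes a bit

# Quantizing to int8 erases the offsets: marked and plain weights coincide.
assert np.array_equal(np.round(w_marked / scale), q)

# In fp32, the payload is recoverable from the sign of the residual.
recovered = (w_marked / scale - q > 0).astype(int)
assert np.array_equal(recovered, bits)
print("payload readable in fp32, invisible after int8 quantization")
```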
https://aclanthology.org/2023.findings-emnlp.221.bib
https://aclanthology.org/2023.findings-emnlp.221/
@inproceedings{mirzaee-kordjamshidi-2023-disentangling, title = "Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning", author = "Mirzaee, Roshanak and Kordjamshidi, Parisa", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.221", doi = "10.18653/v1/2023.findings-emnlp.221", pages = "3379--3397", abstract = "Spatial reasoning over text is challenging as the models not only need to extract the direct spatial information from the text but also reason over those and infer implicit spatial relations. Recent studies highlight the struggles even large language models encounter when it comes to performing spatial reasoning over text. In this paper, we explore the potential benefits of disentangling the processes of information extraction and reasoning in models to address this challenge. To explore this, we design various models that disentangle extraction and reasoning(either symbolic or neural) and compare them with state-of-the-art(SOTA) baselines with no explicit design for these parts. Our experimental results consistently demonstrate the efficacy of disentangling, showcasing its ability to enhance models{'} generalizability within realistic data domains.", }
Spatial reasoning over text is challenging, as models not only need to extract direct spatial information from the text but also reason over it and infer implicit spatial relations. Recent studies highlight the struggles even large language models encounter when performing spatial reasoning over text. In this paper, we explore the potential benefits of disentangling the processes of information extraction and reasoning in models to address this challenge. To explore this, we design various models that disentangle extraction and reasoning (either symbolic or neural) and compare them with state-of-the-art (SOTA) baselines with no explicit design for these parts. Our experimental results consistently demonstrate the efficacy of disentangling, showcasing its ability to enhance models' generalizability within realistic data domains.
[ "Mirzaee, Roshanak", "Kordjamshidi, Parisa" ]
Disentangling Extraction and Reasoning in Multi-hop Spatial Reasoning
findings-emnlp.221
2310.16731
[ "https://github.com/rshnk73/pistaq-sreqa" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.222.bib
https://aclanthology.org/2023.findings-emnlp.222/
@inproceedings{zhang-etal-2023-psyattention, title = "{P}sy{A}ttention: Psychological Attention Model for Personality Detection", author = "Zhang, Baohua and Huang, Yongyi and Cui, Wenyao and Huaping, Zhang and Shang, Jianyun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.222", doi = "10.18653/v1/2023.findings-emnlp.222", pages = "3398--3411", abstract = "Work on personality detection has tended to incorporate psychological features from different personality models, such as BigFive and MBTI. There are more than 900 psychological features, each of which is helpful for personality detection. However, when used in combination, the application of different calculation standards among these features may result in interference between features calculated using distinct systems, thereby introducing noise and reducing performance. This paper adapts different psychological models in the proposed PsyAttention for personality detection, which can effectively encode psychological features, reducing their number by 85{\%}. In experiments on the BigFive and MBTI models, PysAttention achieved average accuracy of 65.66{\%} and 86.30{\%}, respectively, outperforming state-of-the-art methods, indicating that it is effective at encoding psychological features.", }
Work on personality detection has tended to incorporate psychological features from different personality models, such as BigFive and MBTI. There are more than 900 psychological features, each of which is helpful for personality detection. However, when used in combination, the application of different calculation standards among these features may result in interference between features calculated using distinct systems, thereby introducing noise and reducing performance. This paper adapts different psychological models in the proposed PsyAttention for personality detection, which can effectively encode psychological features, reducing their number by 85%. In experiments on the BigFive and MBTI models, PsyAttention achieved average accuracies of 65.66% and 86.30%, respectively, outperforming state-of-the-art methods and indicating that it is effective at encoding psychological features.
[ "Zhang, Baohua", "Huang, Yongyi", "Cui, Wenyao", "Huaping, Zhang", "Shang, Jianyun" ]
PsyAttention: Psychological Attention Model for Personality Detection
findings-emnlp.222
2312.00293
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.223.bib
https://aclanthology.org/2023.findings-emnlp.223/
@inproceedings{kim-etal-2023-roast, title = "{R}o{AST}: Robustifying Language Models via Adversarial Perturbation with Selective Training", author = "Kim, Jaehyung and Mao, Yuning and Hou, Rui and Yu, Hanchao and Liang, Davis and Fung, Pascale and Wang, Qifan and Feng, Fuli and Huang, Lifu and Khabsa, Madian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.223", doi = "10.18653/v1/2023.findings-emnlp.223", pages = "3412--3444", abstract = "Fine-tuning pre-trained language models (LMs) has become the de facto standard in many NLP tasks. Nevertheless, fine-tuned LMs are still prone to robustness issues, such as adversarial robustness and model calibration. Several perspectives of robustness for LMs have been studied independently, but lacking a unified consideration in multiple perspectives. In this paper, we propose Robustifying LMs via Adversarial perturbation with Selective Training (RoAST), a simple yet effective fine-tuning technique to enhance the multi-perspective robustness of LMs in a unified way. RoAST effectively incorporates two important sources for the model robustness, robustness on the perturbed inputs and generalizable knowledge in pre-trained LMs. To be specific, RoAST introduces adversarial perturbation during fine-tuning while the model parameters are selectively updated upon their relative importance to minimize unnecessary deviation. Under a unified evaluation of fine-tuned LMs by incorporating four representative perspectives of model robustness, we demonstrate the effectiveness of RoAST compared to state-of-the-art fine-tuning methods on six different types of LMs, which indicates its usefulness in practice.", }
Fine-tuning pre-trained language models (LMs) has become the de facto standard in many NLP tasks. Nevertheless, fine-tuned LMs are still prone to robustness issues, such as adversarial robustness and model calibration. Several perspectives of robustness for LMs have been studied independently, but a unified consideration across multiple perspectives has been lacking. In this paper, we propose Robustifying LMs via Adversarial perturbation with Selective Training (RoAST), a simple yet effective fine-tuning technique to enhance the multi-perspective robustness of LMs in a unified way. RoAST effectively incorporates two important sources of model robustness: robustness to perturbed inputs and the generalizable knowledge in pre-trained LMs. To be specific, RoAST introduces adversarial perturbation during fine-tuning while model parameters are selectively updated according to their relative importance, to minimize unnecessary deviation. Under a unified evaluation of fine-tuned LMs incorporating four representative perspectives of model robustness, we demonstrate the effectiveness of RoAST compared to state-of-the-art fine-tuning methods on six different types of LMs, which indicates its usefulness in practice.
[ "Kim, Jaehyung", "Mao, Yuning", "Hou, Rui", "Yu, Hanchao", "Liang, Davis", "Fung, Pascale", "Wang, Qifan", "Feng, Fuli", "Huang, Lifu", "Khabsa, Madian" ]
RoAST: Robustifying Language Models via Adversarial Perturbation with Selective Training
findings-emnlp.223
2312.04032
[ "https://github.com/bbuing9/roast" ]
https://huggingface.co/papers/2312.04032
1
1
1
10
[]
[]
[]
1
Poster
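The RoAST abstract names two ingredients: adversarial perturbation during fine-tuning and selective updates of parameters by relative importance. The toy PyTorch sketch below shows one way those pieces fit together; the FGSM-style perturbation, the epsilon of 0.01, and the keep-top-20%-of-gradient-magnitude mask are illustrative assumptions, not the paper's exact criteria.

```python
import torch
import torch.nn.functional as F

model = torch.nn.Linear(32, 2)                   # stand-in for a fine-tuned LM
opt = torch.optim.SGD(model.parameters(), lr=0.1)
x, y = torch.randn(16, 32), torch.randint(0, 2, (16,))

# 1) adversarial perturbation of the inputs (FGSM-style, assumed epsilon)
x_req = x.clone().requires_grad_(True)
loss = F.cross_entropy(model(x_req), y)
(grad_x,) = torch.autograd.grad(loss, x_req)
x_adv = (x + 0.01 * grad_x.sign()).detach()

# 2) selective training: keep only the largest-magnitude gradients per tensor
loss = F.cross_entropy(model(x_adv), y)
opt.zero_grad()
loss.backward()
for p in model.parameters():
    k = max(1, int(0.2 * p.grad.numel()))        # assumed importance cutoff
    thresh = p.grad.abs().flatten().topk(k).values.min()
    p.grad.mul_((p.grad.abs() >= thresh).float())
opt.step()
```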
https://aclanthology.org/2023.findings-emnlp.224.bib
https://aclanthology.org/2023.findings-emnlp.224/
@inproceedings{mahari-etal-2023-law, title = "The Law and {NLP}: Bridging Disciplinary Disconnects", author = "Mahari, Robert and Stammbach, Dominik and Ash, Elliott and Pentland, Alex", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.224", doi = "10.18653/v1/2023.findings-emnlp.224", pages = "3445--3454", abstract = "Legal practice is intrinsically rooted in the fabric of language, yet legal practitioners and scholars have been slow to adopt tools from natural language processing (NLP). At the same time, the legal system is experiencing an access to justice crisis, which could be partially alleviated with NLP. In this position paper, we argue that the slow uptake of NLP in legal practice is exacerbated by a disconnect between the needs of the legal community and the focus of NLP researchers. In a review of recent trends in the legal NLP literature, we find limited overlap between the legal NLP community and legal academia. Our interpretation is that some of the most popular legal NLP tasks fail to address the needs of legal practitioners. We discuss examples of legal NLP tasks that promise to bridge disciplinary disconnects and highlight interesting areas for legal NLP research that remain underexplored.", }
Legal practice is intrinsically rooted in the fabric of language, yet legal practitioners and scholars have been slow to adopt tools from natural language processing (NLP). At the same time, the legal system is experiencing an access to justice crisis, which could be partially alleviated with NLP. In this position paper, we argue that the slow uptake of NLP in legal practice is exacerbated by a disconnect between the needs of the legal community and the focus of NLP researchers. In a review of recent trends in the legal NLP literature, we find limited overlap between the legal NLP community and legal academia. Our interpretation is that some of the most popular legal NLP tasks fail to address the needs of legal practitioners. We discuss examples of legal NLP tasks that promise to bridge disciplinary disconnects and highlight interesting areas for legal NLP research that remain underexplored.
[ "Mahari, Robert", "Stammbach, Dominik", "Ash, Elliott", "Pentl", ", Alex" ]
The Law and NLP: Bridging Disciplinary Disconnects
findings-emnlp.224
2310.14346
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.225.bib
https://aclanthology.org/2023.findings-emnlp.225/
@inproceedings{chen-etal-2023-symbolization, title = "Symbolization, Prompt, and Classification: A Framework for Implicit Speaker Identification in Novels", author = "Chen, Yue and He, Tianwei and Zhou, Hongbin and Gu, Jia-Chen and Lu, Heng and Ling, Zhen-Hua", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.225", doi = "10.18653/v1/2023.findings-emnlp.225", pages = "3455--3467", abstract = "Speaker identification in novel dialogues can be widely applied to various downstream tasks, such as producing multi-speaker audiobooks and converting novels into scripts. However, existing state-of-the-art methods are limited to handling explicit narrative patterns like {``}Tom said, '...''', unable to thoroughly understand long-range contexts and to deal with complex cases. To this end, we propose a framework named SPC, which identifies implicit speakers in novels via symbolization, prompt, and classification. First, SPC symbolizes the mentions of candidate speakers to construct a unified label set. Then, by inserting a prompt we re-formulate speaker identification as a classification task to minimize the gap between the training objectives of speaker identification and the pre-training task. Two auxiliary tasks are also introduced in SPC to enhance long-range context understanding. Experimental results show that SPC outperforms previous methods by a large margin of 4.8{\%} accuracy on the web novel collection, which reduces 47{\%} of speaker identification errors, and also outperforms the emerging ChatGPT. In addition, SPC is more accurate in implicit speaker identification cases that require long-range context semantic understanding.", }
Speaker identification in novel dialogues can be widely applied to various downstream tasks, such as producing multi-speaker audiobooks and converting novels into scripts. However, existing state-of-the-art methods are limited to handling explicit narrative patterns like "Tom said, '...'", and are unable to thoroughly understand long-range contexts or deal with complex cases. To this end, we propose a framework named SPC, which identifies implicit speakers in novels via symbolization, prompt, and classification. First, SPC symbolizes the mentions of candidate speakers to construct a unified label set. Then, by inserting a prompt, we re-formulate speaker identification as a classification task to minimize the gap between the training objectives of speaker identification and the pre-training task. Two auxiliary tasks are also introduced in SPC to enhance long-range context understanding. Experimental results show that SPC outperforms previous methods by a large margin of 4.8% accuracy on the web novel collection, reducing speaker identification errors by 47%, and also outperforms the emerging ChatGPT. In addition, SPC is more accurate in implicit speaker identification cases that require long-range context semantic understanding.
[ "Chen, Yue", "He, Tianwei", "Zhou, Hongbin", "Gu, Jia-Chen", "Lu, Heng", "Ling, Zhen-Hua" ]
Symbolization, Prompt, and Classification: A Framework for Implicit Speaker Identification in Novels
findings-emnlp.225
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.226.bib
https://aclanthology.org/2023.findings-emnlp.226/
@inproceedings{sarch-etal-2023-open, title = "Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models", author = "Sarch, Gabriel and Wu, Yue and Tarr, Michael and Fragkiadaki, Katerina", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.226", doi = "10.18653/v1/2023.findings-emnlp.226", pages = "3468--3500", abstract = "Pre-trained and frozen LLMs can effectively map simple scene re-arrangement instructions to programs over a robot{'}s visuomotor functions through appropriate few-shot example prompting. To parse open-domain natural language and adapt to a user{'}s idiosyncratic procedures, not known during prompt engineering time, fixed prompts fall short. In this paper, we introduce HELPER, an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting: relevant memories are retrieved based on the current dialogue, instruction, correction or VLM description, and used as in-context prompt examples for LLM querying. The memory is expanded during deployment to include pairs of user{'}s language and action plans, to assist future inferences and personalize them to the user{'}s language and routines. HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD), with 1.7x improvement over the previous SOTA for TfD. Our models, code and video results can be found in our project{'}s website: https://helper-agent-llm.github.io.", }
Pre-trained and frozen LLMs can effectively map simple scene re-arrangement instructions to programs over a robot's visuomotor functions through appropriate few-shot example prompting. To parse open-domain natural language and adapt to a user's idiosyncratic procedures, not known during prompt engineering time, fixed prompts fall short. In this paper, we introduce HELPER, an embodied agent equipped with an external memory of language-program pairs that parses free-form human-robot dialogue into action programs through retrieval-augmented LLM prompting: relevant memories are retrieved based on the current dialogue, instruction, correction or VLM description, and used as in-context prompt examples for LLM querying. The memory is expanded during deployment to include pairs of the user's language and action plans, to assist future inferences and personalize them to the user's language and routines. HELPER sets a new state-of-the-art in the TEACh benchmark in both Execution from Dialog History (EDH) and Trajectory from Dialogue (TfD), with a 1.7x improvement over the previous SOTA for TfD. Our models, code and video results can be found on our project's website: https://helper-agent-llm.github.io.
[ "Sarch, Gabriel", "Wu, Yue", "Tarr, Michael", "Fragkiadaki, Katerina" ]
Open-Ended Instructable Embodied Agents with Memory-Augmented Large Language Models
findings-emnlp.226
2310.15127
[ "" ]
https://huggingface.co/papers/2310.15127
1
0
0
4
[]
[]
[]
1
Poster
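The HELPER abstract describes retrieval-augmented prompting over a growing memory of language-program pairs. Below is a minimal sketch of that loop with TF-IDF retrieval standing in for whatever encoder the system actually uses; the memory entries and program syntax are invented for illustration.

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# external memory of (utterance, program) pairs; grows during deployment
memory = [
    ("put the mug in the sink", "goto(mug); pickup(mug); goto(sink); place(sink)"),
    ("slice the bread on the counter", "goto(bread); slice(bread)"),
]

def build_prompt(instruction: str, k: int = 1) -> str:
    """Retrieve the k nearest memories and splice them in as in-context examples."""
    utterances = [u for u, _ in memory]
    vec = TfidfVectorizer().fit(utterances + [instruction])
    sims = cosine_similarity(vec.transform([instruction]),
                             vec.transform(utterances))[0]
    shots = "\n".join(f"Instruction: {memory[i][0]}\nProgram: {memory[i][1]}"
                      for i in sims.argsort()[::-1][:k])
    return f"{shots}\nInstruction: {instruction}\nProgram:"

print(build_prompt("put the cup in the sink"))
# After the LLM answers, the (instruction, program) pair is appended back into
# `memory`, which is how the abstract describes personalization.
```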
https://aclanthology.org/2023.findings-emnlp.227.bib
https://aclanthology.org/2023.findings-emnlp.227/
@inproceedings{zhang-etal-2023-act, title = "{ACT}-{SQL}: In-Context Learning for Text-to-{SQL} with Automatically-Generated Chain-of-Thought", author = "Zhang, Hanchong and Cao, Ruisheng and Chen, Lu and Xu, Hongshen and Yu, Kai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.227", doi = "10.18653/v1/2023.findings-emnlp.227", pages = "3501--3532", abstract = "Recently Large Language Models (LLMs) have been proven to have strong abilities in various domains and tasks. We study the problem of prompt designing in the text-to-SQL task and attempt to improve the LLMs{'} reasoning ability when generating SQL queries. Besides the trivial few-shot in-context learning setting, we design our chain-of-thought (CoT) prompt with a similar method to schema linking. We provide a method named ACT-SQL to automatically generate auto-CoT exemplars and thus the whole process doesn{'}t need manual labeling. Our approach is cost-saving since we only use the LLMs{'} API call once when generating one SQL query. Furthermore, we extend our in-context learning method to the multi-turn text-to-SQL task. The experiment results show that the LLMs{'} performance can benefit from our ACT-SQL approach. Our approach achieves SOTA performance on the Spider dev set among existing in-context learning approaches.", }
Recently, Large Language Models (LLMs) have been proven to have strong abilities in various domains and tasks. We study the problem of prompt design in the text-to-SQL task and attempt to improve the LLMs' reasoning ability when generating SQL queries. Besides the trivial few-shot in-context learning setting, we design our chain-of-thought (CoT) prompt with a method similar to schema linking. We provide a method named ACT-SQL to automatically generate auto-CoT exemplars, so the whole process doesn't need manual labeling. Our approach is cost-saving since we use only one LLM API call when generating each SQL query. Furthermore, we extend our in-context learning method to the multi-turn text-to-SQL task. The experiment results show that the LLMs' performance can benefit from our ACT-SQL approach. Our approach achieves SOTA performance on the Spider dev set among existing in-context learning approaches.
[ "Zhang, Hanchong", "Cao, Ruisheng", "Chen, Lu", "Xu, Hongshen", "Yu, Kai" ]
ACT-SQL: In-Context Learning for Text-to-SQL with Automatically-Generated Chain-of-Thought
findings-emnlp.227
2310.17342
[ "https://github.com/x-lance/text2sql-gpt" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
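The ACT-SQL abstract says the chain-of-thought is designed like schema linking and that each query costs a single API call. The sketch below shows the shape such a prompt can take; the exemplar is hand-written here, whereas the paper generates exemplars automatically.

```python
# Hand-written stand-in for an automatically generated auto-CoT exemplar.
EXEMPLAR = """Question: How many singers are there?
Schema: singer(singer_id, name, age)
Reasoning: "singers" links to table singer; counting rows needs COUNT(*).
SQL: SELECT COUNT(*) FROM singer"""

def act_sql_prompt(question: str, schema: str) -> str:
    # One completion continues with both the reasoning and the final SQL,
    # so a single LLM API call per query suffices.
    return f"{EXEMPLAR}\n\nQuestion: {question}\nSchema: {schema}\nReasoning:"

print(act_sql_prompt("List names of singers older than 30",
                     "singer(singer_id, name, age)"))
```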
https://aclanthology.org/2023.findings-emnlp.228.bib
https://aclanthology.org/2023.findings-emnlp.228/
@inproceedings{sengupta-etal-2023-manifold, title = "Manifold-Preserving Transformers are Effective for Short-Long Range Encoding", author = "Sengupta, Ayan and Akhtar, Md and Chakraborty, Tanmoy", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.228", doi = "10.18653/v1/2023.findings-emnlp.228", pages = "3533--3549", abstract = "Multi-head self-attention-based Transformers have shown promise in different learning tasks. Albeit these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers usually project tokens onto sparse manifolds and fail to preserve mathematical equivalence among the token representations. In this work, we propose TransJect, an encoder model that guarantees a theoretical bound for layer-wise distance preservation between a pair of tokens. We propose a simple alternative to dot-product attention to ensure Lipschitz continuity. This allows TransJect to learn injective mappings to transform token representations to different manifolds with similar topology and preserve Euclidean distance between every pair of tokens in subsequent layers. Evaluations across multiple benchmark short- and long-sequence classification tasks show maximum improvements of 6.8{\%} and 5.9{\%}, respectively, over the variants of Transformers. Additionally, TransJect displays 79{\%} better performance than Transformer on the language modeling task. We further highlight the shortcomings of multi-head self-attention from the statistical physics viewpoint. Although multi-head self-attention was incepted to learn different abstraction levels within the networks, our empirical analyses suggest that different attention heads learn randomly and unorderly. In contrast, TransJect adapts a mixture of experts for regularization; these experts are more orderly and balanced and learn different sparse representations from the input sequences. TransJect exhibits very low entropy and can be efficiently scaled to larger depths.", }
Multi-head self-attention-based Transformers have shown promise in different learning tasks. Although these models exhibit significant improvement in understanding short-term and long-term contexts from sequences, encoders of Transformers and their variants fail to preserve layer-wise contextual information. Transformers usually project tokens onto sparse manifolds and fail to preserve mathematical equivalence among the token representations. In this work, we propose TransJect, an encoder model that guarantees a theoretical bound for layer-wise distance preservation between a pair of tokens. We propose a simple alternative to dot-product attention to ensure Lipschitz continuity. This allows TransJect to learn injective mappings that transform token representations to different manifolds with similar topology and preserve Euclidean distance between every pair of tokens in subsequent layers. Evaluations across multiple benchmark short- and long-sequence classification tasks show maximum improvements of 6.8% and 5.9%, respectively, over the variants of Transformers. Additionally, TransJect displays 79% better performance than the Transformer on the language modeling task. We further highlight the shortcomings of multi-head self-attention from the statistical physics viewpoint. Although multi-head self-attention was conceived to learn different abstraction levels within the networks, our empirical analyses suggest that different attention heads learn in a random, disorderly fashion. In contrast, TransJect adopts a mixture of experts for regularization; these experts are more orderly and balanced and learn different sparse representations from the input sequences. TransJect exhibits very low entropy and can be efficiently scaled to larger depths.
[ "Sengupta, Ayan", "Akhtar, Md", "Chakraborty, Tanmoy" ]
Manifold-Preserving Transformers are Effective for Short-Long Range Encoding
findings-emnlp.228
2310.14206
[ "https://github.com/victor7246/transject" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
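The TransJect abstract does not spell out its replacement for dot-product attention; the sketch below only illustrates the generic idea it gestures at, scoring tokens by bounded negative Euclidean distance instead of unbounded dot products, which is one route to a Lipschitz-continuous mixing step.

```python
import torch

def distance_attention(x: torch.Tensor) -> torch.Tensor:
    """x: (seq, dim) token representations; returns mixed representations."""
    d2 = torch.cdist(x, x).pow(2)          # pairwise squared Euclidean distances
    attn = torch.softmax(-d2, dim=-1)      # nearby tokens receive high weight
    return attn @ x                        # convex combination of token vectors

out = distance_attention(torch.randn(8, 16))
print(out.shape)                           # torch.Size([8, 16])
```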
https://aclanthology.org/2023.findings-emnlp.229.bib
https://aclanthology.org/2023.findings-emnlp.229/
@inproceedings{vejvar-fujimoto-2023-aspiro, title = "{ASPIRO}: Any-shot Structured Parsing-error-Induced {R}epr{O}mpting for Consistent Data-to-Text Generation", author = "Vejvar, Martin and Fujimoto, Yasutaka", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.229", doi = "10.18653/v1/2023.findings-emnlp.229", pages = "3550--3563", abstract = "We present ASPIRO, an approach for structured data verbalisation into short template sentences in zero to few-shot settings. Unlike previous methods, our approach prompts Large Language Models (LLMs) to directly produce entity-agnostic templates, rather than relying on LLMs to faithfully copy the given example entities, or validating/crafting the templates manually. We incorporate LLM re-prompting, triggered by algorithmic parsing checks, as well as the PARENT metric induced consistency validation to identify and rectify template generation problems in real-time. ASPIRO, compared to direct LLM output, averages 66{\%} parsing error rate reduction in generated verbalisations of RDF triples on the DART dataset. Our best 5-shot text-davinci-003 setup, scoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and PARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent fine-tuned pretrained language models.", }
We present ASPIRO, an approach for structured data verbalisation into short template sentences in zero to few-shot settings. Unlike previous methods, our approach prompts Large Language Models (LLMs) to directly produce entity-agnostic templates, rather than relying on LLMs to faithfully copy the given example entities, or validating/crafting the templates manually. We incorporate LLM re-prompting, triggered by algorithmic parsing checks, as well as the PARENT metric induced consistency validation to identify and rectify template generation problems in real-time. ASPIRO, compared to direct LLM output, averages 66% parsing error rate reduction in generated verbalisations of RDF triples on the DART dataset. Our best 5-shot text-davinci-003 setup, scoring BLEU of 50.62, METEOR of 45.16, BLEURT of 0.82, NUBIA of 0.87, and PARENT of 0.8962 on the Rel2Text dataset, competes effectively with recent fine-tuned pretrained language models.
[ "Vejvar, Martin", "Fujimoto, Yasutaka" ]
ASPIRO: Any-shot Structured Parsing-error-Induced ReprOmpting for Consistent Data-to-Text Generation
findings-emnlp.229
2310.17877
[ "https://github.com/vejvarm/aspiro" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
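The ASPIRO abstract describes LLM re-prompting triggered by algorithmic parsing checks. The sketch below shows that control flow with a stubbed LLM call and an assumed two-slot template format; the real system's checks and PARENT-based validation are richer.

```python
import re

def call_llm(prompt: str) -> str:
    # Stub standing in for a real LLM API call.
    return "<subject> was born in <object>."

def parse_check(template: str) -> str | None:
    """Return an error message, or None if the template passes the checks."""
    if "<subject>" not in template or "<object>" not in template:
        return "template must contain both <subject> and <object> slots"
    if re.search(r"<(?!subject>|object>)", template):
        return "template contains an unknown placeholder"
    return None

def generate_template(relation: str, max_retries: int = 3) -> str:
    prompt = f"Write a one-sentence template for the relation '{relation}'."
    for _ in range(max_retries):
        template = call_llm(prompt)
        error = parse_check(template)
        if error is None:
            return template
        prompt += f"\nThe previous attempt was invalid ({error}); try again."
    raise ValueError("no valid template produced")

print(generate_template("birthPlace"))
```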
https://aclanthology.org/2023.findings-emnlp.230.bib
https://aclanthology.org/2023.findings-emnlp.230/
@inproceedings{hou-smith-2023-detecting, title = "Detecting Syntactic Change with Pre-trained Transformer Models", author = "Hou, Liwen and Smith, David", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.230", doi = "10.18653/v1/2023.findings-emnlp.230", pages = "3564--3574", abstract = "We investigate the ability of Transformer-based language models to find syntactic differences between the English of the early 1800s and that of the late 1900s. First, we show that a fine-tuned BERT model can distinguish between text from these two periods using syntactic information only; to show this, we employ a strategy to hide semantic information from the text. Second, we make further use of fine-tuned BERT models to identify specific instances of syntactic change and specific words for which a new part of speech was introduced. To do this, we employ an automatic part-of-speech (POS) tagger and use it to train corpora-specific taggers based only on BERT representations pretrained on different corpora. Notably, our methods of identifying specific candidates for syntactic change avoid using any automatic POS tagger on old text, where its performance may be unreliable; instead, our methods only use untagged old text together with tagged modern text. We examine samples and distributional properties of the model output to validate automatically identified cases of syntactic change. Finally, we use our techniques to confirm the historical rise of the progressive construction, a known example of syntactic change.", }
We investigate the ability of Transformer-based language models to find syntactic differences between the English of the early 1800s and that of the late 1900s. First, we show that a fine-tuned BERT model can distinguish between text from these two periods using syntactic information only; to show this, we employ a strategy to hide semantic information from the text. Second, we make further use of fine-tuned BERT models to identify specific instances of syntactic change and specific words for which a new part of speech was introduced. To do this, we employ an automatic part-of-speech (POS) tagger and use it to train corpora-specific taggers based only on BERT representations pretrained on different corpora. Notably, our methods of identifying specific candidates for syntactic change avoid using any automatic POS tagger on old text, where its performance may be unreliable; instead, our methods only use untagged old text together with tagged modern text. We examine samples and distributional properties of the model output to validate automatically identified cases of syntactic change. Finally, we use our techniques to confirm the historical rise of the progressive construction, a known example of syntactic change.
[ "Hou, Liwen", "Smith, David" ]
Detecting Syntactic Change with Pre-trained Transformer Models
findings-emnlp.230
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.231.bib
https://aclanthology.org/2023.findings-emnlp.231/
@inproceedings{tang-etal-2023-word, title = "Can Word Sense Distribution Detect Semantic Changes of Words?", author = "Tang, Xiaohang and Zhou, Yi and Aida, Taichi and Sen, Procheta and Bollegala, Danushka", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.231", doi = "10.18653/v1/2023.findings-emnlp.231", pages = "3575--3590", abstract = "Semantic Change Detection of words is an important task for various NLP applications that must make time-sensitive predictions. Some words are used over time in novel ways to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation (WSD) methods associate ambiguous words with sense ids, depending on the context in which they occur. Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word has its meaning changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpora. For this purpose, we use pretrained static sense embeddings to automatically annotate each occurrence of the target word in a corpus with a sense id. Next, we compute the distribution of sense ids of a target word in a given corpus. Finally, we use different divergence or distance measures to quantify the semantic change of the target word across the two given corpora. Our experimental results on SemEval 2020 Task 1 dataset show that word sense distributions can be accurately used to predict semantic changes of words in English, German, Swedish and Latin.", }
Semantic Change Detection (SCD) of words is an important task for various NLP applications that must make time-sensitive predictions. Some words are used over time in novel ways to express new meanings, and these new meanings establish themselves as novel senses of existing words. On the other hand, Word Sense Disambiguation (WSD) methods associate ambiguous words with sense ids, depending on the context in which they occur. Given this relationship between WSD and SCD, we explore the possibility of predicting whether a target word's meaning has changed between two corpora collected at different time steps, by comparing the distributions of senses of that word in each corpus. For this purpose, we use pretrained static sense embeddings to automatically annotate each occurrence of the target word in a corpus with a sense id. Next, we compute the distribution of sense ids of a target word in a given corpus. Finally, we use different divergence or distance measures to quantify the semantic change of the target word across the two given corpora. Our experimental results on the SemEval 2020 Task 1 dataset show that word sense distributions can be accurately used to predict semantic changes of words in English, German, Swedish and Latin.
[ "Tang, Xiaohang", "Zhou, Yi", "Aida, Taichi", "Sen, Procheta", "Bollegala, Danushka" ]
Can Word Sense Distribution Detect Semantic Changes of Words?
findings-emnlp.231
2310.10400
[ "https://github.com/LivNLP/Sense-based-Semantic-Change-Prediction" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
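The pipeline above ends by comparing two sense-id distributions of a target word with a divergence or distance measure. A minimal sketch of that final scoring step using Jensen-Shannon distance; the sense annotations are made up here, whereas the paper derives them from pretrained static sense embeddings.

```python
import numpy as np
from scipy.spatial.distance import jensenshannon

def sense_distribution(sense_ids, n_senses):
    counts = np.bincount(sense_ids, minlength=n_senses).astype(float)
    return counts / counts.sum()

# hypothetical sense ids for one target word in two time-separated corpora
old = sense_distribution([0, 0, 0, 1, 1], n_senses=3)  # sense 0 dominates
new = sense_distribution([2, 2, 2, 2, 0], n_senses=3)  # a new sense takes over

print(f"JS distance: {jensenshannon(old, new):.3f}")   # near 0 means stable meaning
```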
https://aclanthology.org/2023.findings-emnlp.232.bib
https://aclanthology.org/2023.findings-emnlp.232/
@inproceedings{deng-etal-2023-gold, title = "Gold: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection", author = "Deng, Zheye and Wang, Weiqi and Wang, Zhaowei and Liu, Xin and Song, Yangqiu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.232", doi = "10.18653/v1/2023.findings-emnlp.232", pages = "3591--3608", abstract = "Commonsense Knowledge Graphs (CSKGs) are crucial for commonsense reasoning, yet constructing them through human annotations can be costly. As a result, various automatic methods have been proposed to construct CSKG with larger semantic coverage. However, these unsupervised approaches introduce spurious noise that can lower the quality of the resulting CSKG, which cannot be tackled easily by existing denoising algorithms due to the unique characteristics of nodes and structures in CSKGs. To address this issue, we propose Gold (Global and Local-aware Denoising), a denoising framework for CSKGs that incorporates entity semantic information, global rules, and local structural information from the CSKG. Experiment results demonstrate that Gold outperforms all baseline methods in noise detection tasks on synthetic noisy CSKG benchmarks. Furthermore, we show that denoising a real-world CSKG is effective and even benefits the downstream zero-shot commonsense question-answering task. Our code and data are publicly available at https://github.com/HKUST-KnowComp/GOLD.", }
Commonsense Knowledge Graphs (CSKGs) are crucial for commonsense reasoning, yet constructing them through human annotations can be costly. As a result, various automatic methods have been proposed to construct CSKGs with larger semantic coverage. However, these unsupervised approaches introduce spurious noise that can lower the quality of the resulting CSKG, which cannot be tackled easily by existing denoising algorithms due to the unique characteristics of nodes and structures in CSKGs. To address this issue, we propose Gold (Global and Local-aware Denoising), a denoising framework for CSKGs that incorporates entity semantic information, global rules, and local structural information from the CSKG. Experiment results demonstrate that Gold outperforms all baseline methods in noise detection tasks on synthetic noisy CSKG benchmarks. Furthermore, we show that denoising a real-world CSKG is effective and even benefits the downstream zero-shot commonsense question-answering task. Our code and data are publicly available at https://github.com/HKUST-KnowComp/GOLD.
[ "Deng, Zheye", "Wang, Weiqi", "Wang, Zhaowei", "Liu, Xin", "Song, Yangqiu" ]
Gold: A Global and Local-aware Denoising Framework for Commonsense Knowledge Graph Noise Detection
findings-emnlp.232
2310.12011
[ "https://github.com/hkust-knowcomp/gold" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.233.bib
https://aclanthology.org/2023.findings-emnlp.233/
@inproceedings{wang-etal-2023-improving-conversational, title = "Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation", author = "Wang, Xi and Rahmani, Hossein and Liu, Jiqun and Yilmaz, Emine", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.233", doi = "10.18653/v1/2023.findings-emnlp.233", pages = "3609--3622", abstract = "Conversational Recommendation System (CRS) is a rapidly growing research area that has gained significant attention alongside advancements in language modelling techniques. However, the current state of conversational recommendation faces numerous challenges due to its relative novelty and limited existing contributions. In this study, we delve into benchmark datasets for developing CRS models and address potential biases arising from the feedback loop inherent in multi-turn interactions, including selection bias and multiple popularity bias variants. Drawing inspiration from the success of generative data via using language models and data augmentation techniques, we present two novel strategies, {`}Once-Aug{'} and {`}PopNudge{'}, to enhance model performance while mitigating biases. Through extensive experiments on ReDial and TG-ReDial benchmark datasets, we show a consistent improvement of CRS techniques with our data augmentation approaches and offer additional insights on addressing multiple newly formulated biases.", }
Conversational Recommendation System (CRS) is a rapidly growing research area that has gained significant attention alongside advancements in language modelling techniques. However, the current state of conversational recommendation faces numerous challenges due to its relative novelty and limited existing contributions. In this study, we delve into benchmark datasets for developing CRS models and address potential biases arising from the feedback loop inherent in multi-turn interactions, including selection bias and multiple popularity bias variants. Drawing inspiration from the success of generative data via language models and data augmentation techniques, we present two novel strategies, 'Once-Aug' and 'PopNudge', to enhance model performance while mitigating biases. Through extensive experiments on the ReDial and TG-ReDial benchmark datasets, we show a consistent improvement of CRS techniques with our data augmentation approaches and offer additional insights on addressing multiple newly formulated biases.
[ "Wang, Xi", "Rahmani, Hossein", "Liu, Jiqun", "Yilmaz, Emine" ]
Improving Conversational Recommendation Systems via Bias Analysis and Language-Model-Enhanced Data Augmentation
findings-emnlp.233
2310.16738
[ "https://github.com/wangxieric/bias-crs" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.234.bib
https://aclanthology.org/2023.findings-emnlp.234/
@inproceedings{bao-etal-2023-exploring, title = "Exploring Graph Pre-training for Aspect-based Sentiment Analysis", author = "Bao, Xiaoyi and Wang, Zhongqing and Zhou, Guodong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.234", doi = "10.18653/v1/2023.findings-emnlp.234", pages = "3623--3634", abstract = "Existing studies tend to extract the sentiment elements in a generative manner in order to avoid complex modeling. Despite their effectiveness, they ignore importance of the relationships between sentiment elements that could be crucial, making the large pre-trained generative models sub-optimal for modeling sentiment knowledge. Therefore, we introduce two pre-training paradigms to improve the generation model by exploring graph pre-training that targeting to strengthen the model in capturing the elements{'} relationships. Specifically, We first employ an Element-level Graph Pre-training paradigm, which is designed to improve the structure awareness of the generative model. Then, we design a Task Decomposition Pre-training paradigm to make the generative model generalizable and robust against various irregular sentiment quadruples. Extensive experiments show the superiority of our proposed method, validate the correctness of our motivation.", }
Existing studies tend to extract the sentiment elements in a generative manner in order to avoid complex modeling. Despite their effectiveness, they ignore the importance of the relationships between sentiment elements, which could be crucial, making large pre-trained generative models sub-optimal for modeling sentiment knowledge. Therefore, we introduce two pre-training paradigms that improve the generation model by exploring graph pre-training, targeting to strengthen the model in capturing the elements' relationships. Specifically, we first employ an Element-level Graph Pre-training paradigm, which is designed to improve the structure awareness of the generative model. Then, we design a Task Decomposition Pre-training paradigm to make the generative model generalizable and robust against various irregular sentiment quadruples. Extensive experiments show the superiority of our proposed method, validating the correctness of our motivation.
[ "Bao, Xiaoyi", "Wang, Zhongqing", "Zhou, Guodong" ]
Exploring Graph Pre-training for Aspect-based Sentiment Analysis
findings-emnlp.234
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.235.bib
https://aclanthology.org/2023.findings-emnlp.235/
@inproceedings{nguyen-etal-2023-demaformer, title = "{D}ema{F}ormer: Damped Exponential Moving Average Transformer with Energy-Based Modeling for Temporal Language Grounding", author = "Nguyen, Thong and Wu, Xiaobao and Dong, Xinshuai and Nguyen, Cong-Duy and Ng, See-Kiong and Luu, Anh", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.235", doi = "10.18653/v1/2023.findings-emnlp.235", pages = "3635--3649", abstract = "Temporal Language Grounding seeks to localize video moments that semantically correspond to a natural language query. Recent advances employ the attention mechanism to learn the relations between video moments and the text query. However, naive attention might not be able to appropriately capture such relations, resulting in ineffective distributions where target video moments are difficult to separate from the remaining ones. To resolve the issue, we propose an energy-based model framework to explicitly learn moment-query distributions. Moreover, we propose DemaFormer, a novel Transformer-based architecture that utilizes exponential moving average with a learnable damping factor to effectively encode moment-query inputs. Comprehensive experiments on four public temporal language grounding datasets showcase the superiority of our methods over the state-of-the-art baselines.", }
Temporal Language Grounding seeks to localize video moments that semantically correspond to a natural language query. Recent advances employ the attention mechanism to learn the relations between video moments and the text query. However, naive attention might not be able to appropriately capture such relations, resulting in ineffective distributions where target video moments are difficult to separate from the remaining ones. To resolve the issue, we propose an energy-based model framework to explicitly learn moment-query distributions. Moreover, we propose DemaFormer, a novel Transformer-based architecture that utilizes exponential moving average with a learnable damping factor to effectively encode moment-query inputs. Comprehensive experiments on four public temporal language grounding datasets showcase the superiority of our methods over the state-of-the-art baselines.
[ "Nguyen, Thong", "Wu, Xiaobao", "Dong, Xinshuai", "Nguyen, Cong-Duy", "Ng, See-Kiong", "Luu, Anh" ]
DemaFormer: Damped Exponential Moving Average Transformer with Energy-Based Modeling for Temporal Language Grounding
findings-emnlp.235
2312.02549
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
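The DemaFormer abstract does not give its exact formulation; the sketch below shows only the textbook recurrence it builds on, an exponential moving average whose damping factor (learnable in the model, fixed here) discounts the carried state.

```python
import numpy as np

def damped_ema(x, alpha=0.3, delta=0.9):
    """y_t = delta * (1 - alpha) * y_{t-1} + alpha * x_t, with damping delta < 1."""
    y, out = 0.0, []
    for x_t in x:
        y = delta * (1 - alpha) * y + alpha * x_t
        out.append(y)
    return np.array(out)

signal = np.sin(np.linspace(0.0, 6.0, 50))
print(damped_ema(signal)[:5])   # smaller delta shortens the effective memory
```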
https://aclanthology.org/2023.findings-emnlp.236.bib
https://aclanthology.org/2023.findings-emnlp.236/
@inproceedings{kamoda-etal-2023-test, title = "Test-time Augmentation for Factual Probing", author = "Kamoda, Go and Heinzerling, Benjamin and Sakaguchi, Keisuke and Inui, Kentaro", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.236", doi = "10.18653/v1/2023.findings-emnlp.236", pages = "3650--3661", abstract = "Factual probing is a method that uses prompts to test if a language model {``}knows{''} certain world knowledge facts. A problem in factual probing is that small changes to the prompt can lead to large changes in model output. Previous work aimed to alleviate this problem by optimizing prompts via text mining or fine-tuning. However, such approaches are relation-specific and do not generalize to unseen relation types. Here, we propose to use test-time augmentation (TTA) as a relation-agnostic method for reducing sensitivity to prompt variations by automatically augmenting and ensembling prompts at test time. Experiments show improved model calibration, i.e., with TTA, model confidence better reflects prediction accuracy. Improvements in prediction accuracy are observed for some models, but for other models, TTA leads to degradation. Error analysis identifies the difficulty of producing high-quality prompt variations as the main challenge for TTA.", }
Factual probing is a method that uses prompts to test if a language model "knows" certain world knowledge facts. A problem in factual probing is that small changes to the prompt can lead to large changes in model output. Previous work aimed to alleviate this problem by optimizing prompts via text mining or fine-tuning. However, such approaches are relation-specific and do not generalize to unseen relation types. Here, we propose to use test-time augmentation (TTA) as a relation-agnostic method for reducing sensitivity to prompt variations by automatically augmenting and ensembling prompts at test time. Experiments show improved model calibration, i.e., with TTA, model confidence better reflects prediction accuracy. Improvements in prediction accuracy are observed for some models, but for other models, TTA leads to degradation. Error analysis identifies the difficulty of producing high-quality prompt variations as the main challenge for TTA.
[ "Kamoda, Go", "Heinzerling, Benjamin", "Sakaguchi, Keisuke", "Inui, Kentaro" ]
Test-time Augmentation for Factual Probing
findings-emnlp.236
2310.17121
[ "https://github.com/gokamoda/TTA4FactualProbing" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
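The TTA abstract proposes automatically augmenting and ensembling prompts at test time. The sketch below applies that idea to a masked-LM probe; the hand-written paraphrases stand in for the paper's automatic augmentation, and plain score averaging is an assumed ensembling choice.

```python
from collections import defaultdict
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

prompts = [
    "The capital of France is [MASK].",
    "France's capital city is [MASK].",
    "The capital city of France is [MASK].",
]

scores: dict[str, float] = defaultdict(float)
for prompt in prompts:
    for pred in fill(prompt, top_k=10):
        scores[pred["token_str"].strip()] += pred["score"] / len(prompts)

answer = max(scores, key=scores.get)
print(answer, round(scores[answer], 3))  # ensembled, better-calibrated confidence
```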
https://aclanthology.org/2023.findings-emnlp.237.bib
https://aclanthology.org/2023.findings-emnlp.237/
@inproceedings{hoeken-etal-2023-methodological, title = "Methodological Insights in Detecting Subtle Semantic Shifts with Contextualized and Static Language Models", author = {Hoeken, Sanne and Alacam, {\"O}zge and Fokkens, Antske and Sommerauer, Pia}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.237", doi = "10.18653/v1/2023.findings-emnlp.237", pages = "3662--3675", abstract = "In this paper, we investigate automatic detection of subtle semantic shifts between social communities of different political convictions in Dutch and English. We perform a methodological study comparing methods using static and contextualized language models. We investigate the impact of specializing contextualized models through fine-tuning on target corpora, word sense disambiguation and sentiment. We furthermore propose a new approach using masked token prediction, that relies on behavioral information, specifically the most probable substitutions, instead of geometrical comparison of representations. Our results show that methods using static models and our masked token prediction method can detect differences in connotation of politically loaded terms, whereas methods that rely on measuring the distance between contextualized representations are not providing clear signals, even in synthetic scenarios of extreme shifts.", }
In this paper, we investigate automatic detection of subtle semantic shifts between social communities of different political convictions in Dutch and English. We perform a methodological study comparing methods using static and contextualized language models. We investigate the impact of specializing contextualized models through fine-tuning on target corpora, word sense disambiguation, and sentiment. We furthermore propose a new approach using masked token prediction that relies on behavioral information, specifically the most probable substitutions, instead of geometrical comparison of representations. Our results show that methods using static models and our masked token prediction method can detect differences in the connotation of politically loaded terms, whereas methods that rely on measuring the distance between contextualized representations do not provide clear signals, even in synthetic scenarios of extreme shifts.
[ "Hoeken, Sanne", "Alacam, {\\\"O}zge", "Fokkens, Antske", "Sommerauer, Pia" ]
Methodological Insights in Detecting Subtle Semantic Shifts with Contextualized and Static Language Models
findings-emnlp.237
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
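The masked token prediction approach above compares behavioral information, the most probable substitutions, instead of representation geometry. A minimal sketch under stated assumptions: a generic masked LM, hand-written contexts standing in for two communities' corpora, and Jaccard overlap of the top-k substitute sets as a simple comparison score.

```python
from transformers import pipeline

fill = pipeline("fill-mask", model="bert-base-uncased")

def top_substitutes(context: str, k: int = 10) -> set[str]:
    """Most probable fillers the masked LM proposes for the target slot."""
    return {pred["token_str"].strip() for pred in fill(context, top_k=k)}

subs_a = top_substitutes("The [MASK] crossed the border illegally.")
subs_b = top_substitutes("The [MASK] sought asylum at the border.")

jaccard = len(subs_a & subs_b) / len(subs_a | subs_b)
print(f"substitute overlap: {jaccard:.2f}")  # low overlap suggests shifted connotation
```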
https://aclanthology.org/2023.findings-emnlp.238.bib
https://aclanthology.org/2023.findings-emnlp.238/
@inproceedings{rohanian-etal-2023-disfluent, title = "Disfluent Cues for Enhanced Speech Understanding in Large Language Models", author = "Rohanian, Morteza and Nooralahzadeh, Farhad and Rohanian, Omid and Clifton, David and Krauthammer, Michael", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.238", doi = "10.18653/v1/2023.findings-emnlp.238", pages = "3676--3684", abstract = "In computational linguistics, the common practice is to {``}clean{''} disfluent content from spontaneous speech. However, we hypothesize that these disfluencies might serve as more than mere noise, potentially acting as informative cues. We use a range of pre-trained models for a reading comprehension task involving disfluent queries, specifically featuring different types of speech repairs. The findings indicate that certain disfluencies can indeed improve model performance, particularly those stemming from context-based adjustments. However, large-scale language models struggle to handle repairs involving decision-making or the correction of lexical or syntactic errors, suggesting a crucial area for potential improvement. This paper thus highlights the importance of a nuanced approach to disfluencies, advocating for their potential utility in enhancing model performance rather than their removal.", }
In computational linguistics, the common practice is to {``}clean{''} disfluent content from spontaneous speech. However, we hypothesize that these disfluencies might serve as more than mere noise, potentially acting as informative cues. We use a range of pre-trained models for a reading comprehension task involving disfluent queries, specifically featuring different types of speech repairs. The findings indicate that certain disfluencies can indeed improve model performance, particularly those stemming from context-based adjustments. However, large-scale language models struggle to handle repairs involving decision-making or the correction of lexical or syntactic errors, suggesting a crucial area for potential improvement. This paper thus highlights the importance of a nuanced approach to disfluencies, advocating for their potential utility in enhancing model performance rather than their removal.
[ "Rohanian, Morteza", "Nooralahzadeh, Farhad", "Rohanian, Omid", "Clifton, David", "Krauthammer, Michael" ]
Disfluent Cues for Enhanced Speech Understanding in Large Language Models
findings-emnlp.238
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.239.bib
https://aclanthology.org/2023.findings-emnlp.239/
@inproceedings{gu-etal-2023-watermarking, title = "Watermarking {PLM}s on Classification Tasks by Combining Contrastive Learning with Weight Perturbation", author = "Gu, Chenxi and Zheng, Xiaoqing and Xu, Jianhan and Wu, Muling and Zhang, Cenyuan and Huang, Chengsong and Cai, Hua and Huang, Xuanjing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.239", doi = "10.18653/v1/2023.findings-emnlp.239", pages = "3685--3694", abstract = "Large pre-trained language models (PLMs) have achieved remarkable success, making them highly valuable intellectual property due to their expensive training costs. Consequently, model watermarking, a method developed to protect the intellectual property of neural models, has emerged as a crucial yet underexplored technique. The problem of watermarking PLMs has remained unsolved since the parameters of PLMs will be updated when fine-tuned on downstream datasets, and then embedded watermarks could be removed easily due to the catastrophic forgetting phenomenon. This study investigates the feasibility of watermarking PLMs by embedding backdoors that can be triggered by specific inputs. We employ contrastive learning during the watermarking phase, allowing the representations of specific inputs to be isolated from others and mapped to a particular label after fine-tuning. Moreover, we demonstrate that by combining weight perturbation with the proposed method, watermarks can be embedded in a flatter region of the loss landscape, thereby increasing their robustness to watermark removal. Extensive experiments on multiple datasets demonstrate that the embedded watermarks can be robustly extracted without any knowledge about downstream tasks, and with a high success rate.", }
Large pre-trained language models (PLMs) have achieved remarkable success, making them highly valuable intellectual property due to their expensive training costs. Consequently, model watermarking, a method developed to protect the intellectual property of neural models, has emerged as a crucial yet underexplored technique. The problem of watermarking PLMs has remained unsolved since the parameters of PLMs will be updated when fine-tuned on downstream datasets, and then embedded watermarks could be removed easily due to the catastrophic forgetting phenomenon. This study investigates the feasibility of watermarking PLMs by embedding backdoors that can be triggered by specific inputs. We employ contrastive learning during the watermarking phase, allowing the representations of specific inputs to be isolated from others and mapped to a particular label after fine-tuning. Moreover, we demonstrate that by combining weight perturbation with the proposed method, watermarks can be embedded in a flatter region of the loss landscape, thereby increasing their robustness to watermark removal. Extensive experiments on multiple datasets demonstrate that the embedded watermarks can be robustly extracted without any knowledge about downstream tasks, and with a high success rate.
[ "Gu, Chenxi", "Zheng, Xiaoqing", "Xu, Jianhan", "Wu, Muling", "Zhang, Cenyuan", "Huang, Chengsong", "Cai, Hua", "Huang, Xuanjing" ]
Watermarking PLMs on Classification Tasks by Combining Contrastive Learning with Weight Perturbation
findings-emnlp.239
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
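The Gu et al. abstract above combines a backdoor objective with contrastive learning so that trigger representations are isolated from clean ones. A rough sketch of one plausible loss, assuming a SupCon-style contrastive term and a simple weighted sum (the paper's exact loss form, temperature, and weighting are not given here):

```python
# Sketch: contrastive isolation of trigger inputs for backdoor watermarking.
# The loss form and hyperparameters are assumptions, not the paper's recipe.
import torch
import torch.nn.functional as F

def watermark_loss(encoder, classifier, trigger_x, clean_x, wm_label,
                   tau=0.1, alpha=1.0):
    z_t = F.normalize(encoder(trigger_x), dim=-1)   # trigger representations
    z_c = F.normalize(encoder(clean_x), dim=-1)     # clean representations
    ce = F.cross_entropy(classifier(z_t), wm_label) # map triggers to wm label

    z = torch.cat([z_t, z_c])                       # (T+C, d); assumes T >= 2
    sim = (z_t @ z.T) / tau                         # (T, T+C) similarities
    T = z_t.size(0)
    self_mask = torch.eye(T, z.size(0), dtype=torch.bool)
    sim = sim.masked_fill(self_mask, float("-inf")) # drop self-similarity
    log_prob = sim - sim.logsumexp(dim=1, keepdim=True)
    pos_mask = torch.zeros_like(self_mask)
    pos_mask[:, :T] = True                          # other triggers = positives
    pos_mask &= ~self_mask
    con = -(log_prob.masked_fill(~pos_mask, 0.0).sum(1) / pos_mask.sum(1)).mean()
    return ce + alpha * con
# Weight perturbation (e.g., a SAM-style ascent step before each update) can
# be layered on top so the watermark lands in a flatter loss region.
```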
https://aclanthology.org/2023.findings-emnlp.240.bib
https://aclanthology.org/2023.findings-emnlp.240/
@inproceedings{afrin-etal-2023-banlemma, title = "{B}an{L}emma: A Word Formation Dependent Rule and Dictionary Based {B}angla Lemmatizer", author = "Afrin, Sadia and Chowdhury, Md. Shahad Mahmud and Islam, Md. and Khan, Faisal and Chowdhury, Labib and Mahtab, Md. and Chowdhury, Nazifa and Forkan, Massud and Kundu, Neelima and Arif, Hakim and Rashid, Mohammad Mamun Or and Amin, Mohammad and Mohammed, Nabeel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.240", doi = "10.18653/v1/2023.findings-emnlp.240", pages = "3695--3710", abstract = "Lemmatization holds significance in both natural language processing (NLP) and linguistics, as it effectively decreases data density and aids in comprehending contextual meaning. However, due to the highly inflected nature and morphological richness, lemmatization in Bangla text poses a complex challenge. In this study, we propose linguistic rules for lemmatization and utilize a dictionary along with the rules to design a lemmatizer specifically for Bangla. Our system aims to lemmatize words based on their parts of speech class within a given sentence. Unlike previous rule-based approaches, we analyzed the suffix marker occurrence according to the morpho-syntactic values and then utilized sequences of suffix markers instead of entire suffixes. To develop our rules, we analyze a large corpus of Bangla text from various domains, sources, and time periods to observe the word formation of inflected words. The lemmatizer achieves an accuracy of 96.36{\%} when tested against a manually annotated test dataset by trained linguists and demonstrates competitive performance on three previously published Bangla lemmatization datasets. We are making the code and datasets publicly available at https://github.com/eblict-gigatech/BanLemma in order to contribute to the further advancement of Bangla NLP.", }
Lemmatization holds significance in both natural language processing (NLP) and linguistics, as it effectively decreases data density and aids in comprehending contextual meaning. However, due to the highly inflected nature and morphological richness, lemmatization in Bangla text poses a complex challenge. In this study, we propose linguistic rules for lemmatization and utilize a dictionary along with the rules to design a lemmatizer specifically for Bangla. Our system aims to lemmatize words based on their parts of speech class within a given sentence. Unlike previous rule-based approaches, we analyzed the suffix marker occurrence according to the morpho-syntactic values and then utilized sequences of suffix markers instead of entire suffixes. To develop our rules, we analyze a large corpus of Bangla text from various domains, sources, and time periods to observe the word formation of inflected words. The lemmatizer achieves an accuracy of 96.36{\%} when tested against a manually annotated test dataset by trained linguists and demonstrates competitive performance on three previously published Bangla lemmatization datasets. We are making the code and datasets publicly available at https://github.com/eblict-gigatech/BanLemma in order to contribute to the further advancement of Bangla NLP.
[ "Afrin, Sadia", "Chowdhury, Md. Shahad Mahmud", "Islam, Md.", "Khan, Faisal", "Chowdhury, Labib", "Mahtab, Md.", "Chowdhury, Nazifa", "Forkan, Massud", "Kundu, Neelima", "Arif, Hakim", "Rashid, Mohammad Mamun Or", "Amin, Mohammad", "Mohammed, Nabeel" ]
BanLemma: A Word Formation Dependent Rule and Dictionary Based Bangla Lemmatizer
findings-emnlp.240
2311.03078
[ "https://github.com/eblict-gigatech/BanLemma" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
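BanLemma, per the abstract above, strips sequences of suffix markers conditioned on part of speech and validates candidates against a dictionary. A toy sketch of that control flow; the marker sequences and dictionary below are illustrative stand-ins, not the published rules:

```python
# Toy sketch of PoS-dependent suffix-marker-sequence stripping with
# dictionary validation. Markers and dictionary are illustrative only.
SUFFIX_MARKERS = {
    "NOUN": [["গুলো", "তে"], ["গুলো"], ["টি"], ["দের"]],  # marker sequences
    "VERB": [["ছিল"], ["ছে"], ["বে"]],
}
DICTIONARY = {"বই", "ছেলে", "কর"}  # known lemmas

def lemmatize(word, pos):
    for seq in SUFFIX_MARKERS.get(pos, []):
        candidate = word
        for marker in reversed(seq):          # strip the outermost marker first
            if candidate.endswith(marker):
                candidate = candidate[: -len(marker)]
        if candidate != word and candidate in DICTIONARY:
            return candidate                  # dictionary-validated lemma
    return word                               # fall back to the surface form

print(lemmatize("বইগুলোতে", "NOUN"))          # -> বই ("in the books" -> "book")
```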
https://aclanthology.org/2023.findings-emnlp.241.bib
https://aclanthology.org/2023.findings-emnlp.241/
@inproceedings{loya-etal-2023-exploring, title = "Exploring the Sensitivity of {LLM}s{'} Decision-Making Capabilities: Insights from Prompt Variations and Hyperparameters", author = "Loya, Manikanta and Sinha, Divya and Futrell, Richard", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.241", doi = "10.18653/v1/2023.findings-emnlp.241", pages = "3711--3716", abstract = "The advancement of Large Language Models (LLMs) has led to their widespread use across a broad spectrum of tasks, including decision-making. Prior studies have compared the decision-making abilities of LLMs with those of humans from a psychological perspective. However, these studies have not always properly accounted for the sensitivity of LLMs{'} behavior to hyperparameters and variations in the prompt. In this study, we examine LLMs{'} performance on the Horizon decision-making task studied by Binz and Schulz (2023), analyzing how LLMs respond to variations in prompts and hyperparameters. By experimenting on three OpenAI language models possessing different capabilities, we observe that the decision-making abilities fluctuate based on the input prompts and temperature settings. Contrary to previous findings, language models display a human-like exploration{--}exploitation tradeoff after simple adjustments to the prompt.", }
The advancement of Large Language Models (LLMs) has led to their widespread use across a broad spectrum of tasks, including decision-making. Prior studies have compared the decision-making abilities of LLMs with those of humans from a psychological perspective. However, these studies have not always properly accounted for the sensitivity of LLMs{'} behavior to hyperparameters and variations in the prompt. In this study, we examine LLMs{'} performance on the Horizon decision-making task studied by Binz and Schulz (2023), analyzing how LLMs respond to variations in prompts and hyperparameters. By experimenting on three OpenAI language models possessing different capabilities, we observe that the decision-making abilities fluctuate based on the input prompts and temperature settings. Contrary to previous findings, language models display a human-like exploration{--}exploitation tradeoff after simple adjustments to the prompt.
[ "Loya, Manikanta", "Sinha, Divya", "Futrell, Richard" ]
Exploring the Sensitivity of LLMs' Decision-Making Capabilities: Insights from Prompt Variations and Hyperparameters
findings-emnlp.241
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
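The Loya et al. study above amounts to a grid sweep over prompt phrasings and temperatures on a bandit-style task. A hypothetical harness for such a sweep, assuming the current OpenAI Python client; the prompts and model name are placeholders, not the paper's materials:

```python
# Hypothetical sweep over prompt variants and temperatures.
import itertools

from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment
PROMPTS = [
    "You saw rewards {obs} from two slot machines. Which do you play next, 1 or 2?",
    "Observed rewards: {obs}. To maximize total reward, choose machine 1 or 2.",
]
TEMPERATURES = [0.0, 0.5, 1.0]

def sweep(obs):
    choices = {}
    for prompt, temp in itertools.product(PROMPTS, TEMPERATURES):
        resp = client.chat.completions.create(
            model="gpt-4o-mini",  # placeholder model
            messages=[{"role": "user", "content": prompt.format(obs=obs)}],
            temperature=temp,
            max_tokens=4,
        )
        choices[(prompt, temp)] = resp.choices[0].message.content.strip()
    return choices  # inspect how the decision flips across settings
```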
https://aclanthology.org/2023.findings-emnlp.242.bib
https://aclanthology.org/2023.findings-emnlp.242/
@inproceedings{luo-etal-2023-search, title = "Search Augmented Instruction Learning", author = "Luo, Hongyin and Zhang, Tianhua and Chuang, Yung-Sung and Gong, Yuan and Kim, Yoon and Wu, Xixin and Meng, Helen and Glass, James", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.242", doi = "10.18653/v1/2023.findings-emnlp.242", pages = "3717--3729", abstract = "Large language models (LLMs) have been significantly improved by instruction fine-tuning, but still lack transparency and the ability to utilize up-to-date knowledge and information. In this work, we propose search-augmented instruction learning (SAIL), which grounds the language generation and instruction following abilities on complex search results generated by in-house and external search engines. With an instruction tuning corpus, we collect search results for each training case from different search APIs and domains, and construct a new search-grounded training set containing (instruction, grounding information, response) triplets. We then fine-tune the LLaMA-7B model on the constructed training set. Since the collected results contain unrelated and disputing languages, the model needs to learn to ground on trustworthy search results, filter out distracting passages, and generate the target response. The search result-denoising process entails explicit trustworthy information selection and multi-hop reasoning, since the retrieved passages might be informative but not contain the instruction-following answer. Experiments show that the fine-tuned SAIL-7B model has a strong instruction-following ability, and it performs significantly better on transparency-sensitive tasks, including open-ended question answering and fact checking.", }
Large language models (LLMs) have been significantly improved by instruction fine-tuning, but still lack transparency and the ability to utilize up-to-date knowledge and information. In this work, we propose search-augmented instruction learning (SAIL), which grounds the language generation and instruction following abilities on complex search results generated by in-house and external search engines. With an instruction tuning corpus, we collect search results for each training case from different search APIs and domains, and construct a new search-grounded training set containing (instruction, grounding information, response) triplets. We then fine-tune the LLaMA-7B model on the constructed training set. Since the collected results contain unrelated and disputing languages, the model needs to learn to ground on trustworthy search results, filter out distracting passages, and generate the target response. The search result-denoising process entails explicit trustworthy information selection and multi-hop reasoning, since the retrieved passages might be informative but not contain the instruction-following answer. Experiments show that the fine-tuned SAIL-7B model has a strong instruction-following ability, and it performs significantly better on transparency-sensitive tasks, including open-ended question answering and fact checking.
[ "Luo, Hongyin", "Zhang, Tianhua", "Chuang, Yung-Sung", "Gong, Yuan", "Kim, Yoon", "Wu, Xixin", "Meng, Helen", "Glass, James" ]
Search Augmented Instruction Learning
findings-emnlp.242
[ "" ]
https://huggingface.co/papers/2305.15225
1
2
0
9
[ "lukasmoeller/mpt-7b-sail-ep1", "luohy/SAIL-7b" ]
[ "lukasmoeller/sail_preprocessed" ]
[]
1
Poster
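SAIL's training data, as described above, pairs each instruction with retrieved search results and the target response. A sketch of how such an (instruction, grounding, response) triplet might be serialized into a fine-tuning example; the prompt template and the `search` callable are assumptions, not the paper's exact format:

```python
# Sketch: assembling (instruction, grounding, response) triplets for
# search-augmented instruction tuning. `search` stands in for any search API.
def build_sail_example(instruction, response, search, k=3):
    passages = search(instruction)[:k]             # top-k retrieved passages
    grounding = "\n".join(
        f"Search result {i + 1}: {p}" for i, p in enumerate(passages)
    )
    prompt = (
        f"{grounding}\n\n"
        f"Instruction: {instruction}\n"
        "Some search results may be distracting or untrustworthy; ground your "
        "answer only on the informative ones.\nResponse:"
    )
    return {"prompt": prompt, "completion": f" {response}"}
```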
https://aclanthology.org/2023.findings-emnlp.243.bib
https://aclanthology.org/2023.findings-emnlp.243/
@inproceedings{wan-etal-2023-kelly, title = "{``}Kelly is a Warm Person, Joseph is a Role Model{''}: Gender Biases in {LLM}-Generated Reference Letters", author = "Wan, Yixin and Pu, George and Sun, Jiao and Garimella, Aparna and Chang, Kai-Wei and Peng, Nanyun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.243", doi = "10.18653/v1/2023.findings-emnlp.243", pages = "3730--3748", abstract = "Large Language Models (LLMs) have recently emerged as an effective tool to assist individuals in writing various types of content, including professional documents such as recommendation letters. Though bringing convenience, this application also introduces unprecedented fairness concerns. Model-generated reference letters might be directly used by users in professional scenarios. If underlying biases exist in these model-constructed letters, using them without scrutinization could lead to direct societal harms, such as sabotaging application success rates for female applicants. In light of this pressing issue, it is imminent and necessary to comprehensively study fairness issues and associated harms in this real-world use case. In this paper, we critically examine gender biases in LLM-generated reference letters. Drawing inspiration from social science findings, we design evaluation methods to manifest biases through 2 dimensions: (1) biases in language style and (2) biases in lexical content. We further investigate the extent of bias propagation by analyzing the hallucination bias of models, a term that we define to be bias exacerbation in model-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs- ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters. Our findings not only warn against using LLMs for this application without scrutinization, but also illuminate the importance of thoroughly studying hidden biases and harms in LLM-generated professional documents.", }
Large Language Models (LLMs) have recently emerged as an effective tool to assist individuals in writing various types of content, including professional documents such as recommendation letters. Though bringing convenience, this application also introduces unprecedented fairness concerns. Model-generated reference letters might be directly used by users in professional scenarios. If underlying biases exist in these model-constructed letters, using them without scrutinization could lead to direct societal harms, such as sabotaging application success rates for female applicants. In light of this pressing issue, it is imminent and necessary to comprehensively study fairness issues and associated harms in this real-world use case. In this paper, we critically examine gender biases in LLM-generated reference letters. Drawing inspiration from social science findings, we design evaluation methods to manifest biases through 2 dimensions: (1) biases in language style and (2) biases in lexical content. We further investigate the extent of bias propagation by analyzing the hallucination bias of models, a term that we define to be bias exacerbation in model-hallucinated contents. Through benchmarking evaluation on 2 popular LLMs- ChatGPT and Alpaca, we reveal significant gender biases in LLM-generated recommendation letters. Our findings not only warn against using LLMs for this application without scrutinization, but also illuminate the importance of thoroughly studying hidden biases and harms in LLM-generated professional documents.
[ "Wan, Yixin", "Pu, George", "Sun, Jiao", "Garimella, Aparna", "Chang, Kai-Wei", "Peng, Nanyun" ]
“Kelly is a Warm Person, Joseph is a Role Model”: Gender Biases in LLM-Generated Reference Letters
findings-emnlp.243
[ "https://github.com/uclanlp/biases-llm-reference-letters" ]
https://huggingface.co/papers/2310.09219
1
0
0
6
[ "emmatliu/language-agency-classifier" ]
[ "elaine1wan/Language-Agency-Classification", "elaine1wan/Reference-Letter-Bias-Prompts" ]
[ "emmatliu/LLMReferenceLetterBias" ]
1
Poster
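One of the two bias dimensions above, lexical content, can be approximated with a lexicon-based log-odds comparison between agentic and communal vocabulary. A toy sketch of that kind of measurement, with illustrative word lists rather than the paper's lexica:

```python
# Sketch: a lexicon-based check for biased lexical content in generated
# letters (log-odds style). Word lists are illustrative only.
import math
import re
from collections import Counter

AGENTIC = {"leader", "ambitious", "confident", "exceptional"}
COMMUNAL = {"warm", "kind", "helpful", "pleasant"}

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def agency_log_odds(letters):
    """Positive values = more agentic than communal language."""
    counts = Counter(t for letter in letters for t in tokens(letter))
    agentic = sum(counts[w] for w in AGENTIC) + 1   # add-one smoothing
    communal = sum(counts[w] for w in COMMUNAL) + 1
    return math.log(agentic / communal)

# Compare, e.g., agency_log_odds(letters_for_male_names) against
# agency_log_odds(letters_for_female_names).
```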
https://aclanthology.org/2023.findings-emnlp.244.bib
https://aclanthology.org/2023.findings-emnlp.244/
@inproceedings{zhou-etal-2023-textmixer, title = "{T}ext{M}ixer: Mixing Multiple Inputs for Privacy-Preserving Inference", author = "Zhou, Xin and Lu, Yi and Ma, Ruotian and Gui, Tao and Zhang, Qi and Huang, Xuanjing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.244", doi = "10.18653/v1/2023.findings-emnlp.244", pages = "3749--3762", abstract = "Pre-trained language models (PLMs) are often deployed as cloud services, enabling users to upload textual data and perform inference remotely. However, users{'} personal text often contains sensitive information, and sharing such data directly with the service providers can lead to serious privacy leakage. To address this problem, we introduce a novel privacy-preserving inference framework called \textbf{ \textit{MixPi} }, which prevents plaintext leakage during the inference phase. Inspired by $k$-anonymity, MixPi aims to obfuscate a user{'}s private input by mixing it with multiple other inputs, thereby confounding potential privacy attackers. To achieve this, our approach involves: (1) proposing a novel encryption module, Privacy Mixer, which encrypts input from three distinct dimensions: mixing, representation, and position. (2) adopting a pre-trained Multi-input Multi-output network to handle mixed representations and obtain multiple predictions. (3) employing a Privacy Demixer to ensure only the user can decrypt the real output among the multiple predictions. Furthermore, we explore different ways to automatically generate synthetic inputs required for mixing. Experimental results on token and sentence classification tasks demonstrate that MixPi greatly surpasses existing privacy-preserving methods in both performance and privacy.", }
Pre-trained language models (PLMs) are often deployed as cloud services, enabling users to upload textual data and perform inference remotely. However, users{'} personal text often contains sensitive information, and sharing such data directly with the service providers can lead to serious privacy leakage. To address this problem, we introduce a novel privacy-preserving inference framework called \textbf{ \textit{MixPi} }, which prevents plaintext leakage during the inference phase. Inspired by $k$-anonymity, MixPi aims to obfuscate a user{'}s private input by mixing it with multiple other inputs, thereby confounding potential privacy attackers. To achieve this, our approach involves: (1) proposing a novel encryption module, Privacy Mixer, which encrypts input from three distinct dimensions: mixing, representation, and position. (2) adopting a pre-trained Multi-input Multi-output network to handle mixed representations and obtain multiple predictions. (3) employing a Privacy Demixer to ensure only the user can decrypt the real output among the multiple predictions. Furthermore, we explore different ways to automatically generate synthetic inputs required for mixing. Experimental results on token and sentence classification tasks demonstrate that MixPi greatly surpasses existing privacy-preserving methods in both performance and privacy.
[ "Zhou, Xin", "Lu, Yi", "Ma, Ruotian", "Gui, Tao", "Zhang, Qi", "Huang, Xuanjing" ]
TextMixer: Mixing Multiple Inputs for Privacy-Preserving Inference
findings-emnlp.244
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
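MixPi, per the abstract above, obfuscates a private input by mixing it with other inputs so that only the user can recover the real prediction. A loose sketch of the mixing idea as a seeded convex combination of embeddings; the actual Privacy Mixer encrypts along mixing, representation, and position dimensions and pairs with a multi-input multi-output network, which this toy version does not reproduce:

```python
# Toy sketch: k-anonymity-style mixing of one real input with synthetic ones.
import torch

def mix_inputs(real_emb, synthetic_embs, seed):
    """Mix one real input embedding (seq_len, hidden) with k-1 synthetic
    ones; only the seed holder knows which slot holds the real input."""
    g = torch.Generator().manual_seed(seed)
    k = len(synthetic_embs) + 1
    perm = torch.randperm(k, generator=g)           # secret slot assignment
    stack = torch.stack([real_emb, *synthetic_embs])[perm]
    weights = torch.softmax(torch.randn(k, generator=g), dim=0)
    mixed = (weights[:, None, None] * stack).sum(0) # (seq_len, hidden)
    return mixed, perm.tolist().index(0)            # mixed rep + secret index
```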
https://aclanthology.org/2023.findings-emnlp.245.bib
https://aclanthology.org/2023.findings-emnlp.245/
@inproceedings{kim-etal-2023-fineprompt, title = "{F}ine{P}rompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in {GPT}-4", author = "Kim, Jeonghwan and Hong, Giwon and Myaeng, Sung-Hyon and Whang, Joyce", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.245", doi = "10.18653/v1/2023.findings-emnlp.245", pages = "3763--3775", abstract = "Compositional reasoning across texts has been a long-standing challenge in natural language processing. With large language models like GPT-4 taking over the field, prompting techniques such as chain-of-thought (CoT) were proposed to unlock compositional, multi-step reasoning capabilities of LLMs. Despite their success, the prompts demand significant human effort to discover and validate them. Our work draws attention to the idea of transferring task-specific inductive biases from finetuned models to prompts, as a way of improving GPT-4{'}s compositional reasoning capabilities. To leverage these inductive biases, we formulate prompt templates to ease the transfer of inductive biases. The experimental results on multi-hop question answering and numerical reasoning over text show that our proposed prompt scheme shows competitive zero-shot and few-shot performances compared to existing prompts on complicated reasoning tasks, highlighting the importance of adopting the validated biases of the previous paradigm.", }
Compositional reasoning across texts has been a long-standing challenge in natural language processing. With large language models like GPT-4 taking over the field, prompting techniques such as chain-of-thought (CoT) were proposed to unlock compositional, multi-step reasoning capabilities of LLMs. Despite their success, the prompts demand significant human effort to discover and validate them. Our work draws attention to the idea of transferring task-specific inductive biases from finetuned models to prompts, as a way of improving GPT-4{'}s compositional reasoning capabilities. To leverage these inductive biases, we formulate prompt templates to ease the transfer of inductive biases. The experimental results on multi-hop question answering and numerical reasoning over text show that our proposed prompt scheme shows competitive zero-shot and few-shot performances compared to existing prompts on complicated reasoning tasks, highlighting the importance of adopting the validated biases of the previous paradigm.
[ "Kim, Jeonghwan", "Hong, Giwon", "Myaeng, Sung-Hyon", "Whang, Joyce" ]
FinePrompt: Unveiling the Role of Finetuned Inductive Bias on Compositional Reasoning in GPT-4
findings-emnlp.245
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.246.bib
https://aclanthology.org/2023.findings-emnlp.246/
@inproceedings{chaudhary-etal-2023-teacher, title = "Teacher Perception of Automatically Extracted Grammar Concepts for {L}2 Language Learning", author = "Chaudhary, Aditi and Sampath, Arun and Sheshadri, Ashwin and Anastasopoulos, Antonios and Neubig, Graham", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.246", doi = "10.18653/v1/2023.findings-emnlp.246", pages = "3776--3793", abstract = "One of the challenges in language teaching is how best to organize rules regarding syntax, semantics, or phonology in a meaningful manner. This not only requires content creators to have pedagogical skills, but also have that language{'}s deep understanding. While comprehensive materials to develop such curricula are available in English and some broadly spoken languages, for many other languages, teachers need to manually create them in response to their students{'} needs. This is challenging because i) it requires that such experts be accessible and have the necessary resources, and ii) describing all the intricacies of a language is time-consuming and prone to omission. In this work, we aim to facilitate this process by automatically discovering and visualizing grammar descriptions. We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary). We apply this method for teaching two Indian languages, Kannada and Marathi, which, unlike English, do not have well-developed resources for second language learning. To assess the perceived utility of the extracted material, we enlist the help of language educators from schools in North America to perform a manual evaluation, who find the materials have potential to be used for their lesson preparation and learner evaluation.", }
One of the challenges in language teaching is how best to organize rules regarding syntax, semantics, or phonology in a meaningful manner. This not only requires content creators to have pedagogical skills, but also have that language{'}s deep understanding. While comprehensive materials to develop such curricula are available in English and some broadly spoken languages, for many other languages, teachers need to manually create them in response to their students{'} needs. This is challenging because i) it requires that such experts be accessible and have the necessary resources, and ii) describing all the intricacies of a language is time-consuming and prone to omission. In this work, we aim to facilitate this process by automatically discovering and visualizing grammar descriptions. We extract descriptions from a natural text corpus that answer questions about morphosyntax (learning of word order, agreement, case marking, or word formation) and semantics (learning of vocabulary). We apply this method for teaching two Indian languages, Kannada and Marathi, which, unlike English, do not have well-developed resources for second language learning. To assess the perceived utility of the extracted material, we enlist the help of language educators from schools in North America to perform a manual evaluation, who find the materials have potential to be used for their lesson preparation and learner evaluation.
[ "Chaudhary, Aditi", "Sampath, Arun", "Sheshadri, Ashwin", "Anastasopoulos, Antonios", "Neubig, Graham" ]
Teacher Perception of Automatically Extracted Grammar Concepts for L2 Language Learning
findings-emnlp.246
2206.05154
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.247.bib
https://aclanthology.org/2023.findings-emnlp.247/
@inproceedings{sun-etal-2023-allies, title = "Allies: Prompting Large Language Model with Beam Search", author = "Sun, Hao and Liu, Xiao and Gong, Yeyun and Zhang, Yan and Jiang, Daxin and Yang, Linjun and Duan, Nan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.247", doi = "10.18653/v1/2023.findings-emnlp.247", pages = "3794--3805", abstract = "With the advance of large language models (LLMs), the research field of LLM applications becomes more and more popular and the idea of constructing pipelines to accomplish complex tasks by stacking LLM API calls come true. However, this kind of methods face two limitations: narrow information coverage and low fault tolerance. In this work, we propose a novel method called ALLIES. Given an input query, ALLIES leverages LLMs to iteratively generate new queries related to the original query, enabling an iterative reasoning process. By iteratively refining and expanding the scope of the original query, ALLIES captures and utilizes hidden knowledge that may not be directly obtainable through retrieval. We take zero-shot open-domain question answering (ODQA) as an application scene and evaluate ALLIES on the widely-used benchmarks, such as NQ, WebQ and TriviaQA. The experimental results demonstrate that ALLIES significantly outperforms other zero-shot baselines, indicating its effectiveness in tackling those challenges. Our code is available in https://github.com/microsoft/SimXNS/tree/main/ALLIES.", }
With the advance of large language models (LLMs), the research field of LLM applications becomes more and more popular and the idea of constructing pipelines to accomplish complex tasks by stacking LLM API calls come true. However, this kind of methods face two limitations: narrow information coverage and low fault tolerance. In this work, we propose a novel method called ALLIES. Given an input query, ALLIES leverages LLMs to iteratively generate new queries related to the original query, enabling an iterative reasoning process. By iteratively refining and expanding the scope of the original query, ALLIES captures and utilizes hidden knowledge that may not be directly obtainable through retrieval. We take zero-shot open-domain question answering (ODQA) as an application scene and evaluate ALLIES on the widely-used benchmarks, such as NQ, WebQ and TriviaQA. The experimental results demonstrate that ALLIES significantly outperforms other zero-shot baselines, indicating its effectiveness in tackling those challenges. Our code is available in https://github.com/microsoft/SimXNS/tree/main/ALLIES.
[ "Sun, Hao", "Liu, Xiao", "Gong, Yeyun", "Zhang, Yan", "Jiang, Daxin", "Yang, Linjun", "Duan, Nan" ]
Allies: Prompting Large Language Model with Beam Search
findings-emnlp.247
2305.14766
[ "https://github.com/microsoft/simxns" ]
https://huggingface.co/papers/2305.14766
1
0
0
7
[]
[]
[]
1
Poster
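ALLIES, as summarized above, iteratively generates queries related to the original question and keeps the best-scoring expansions, beam-search style. A schematic sketch of that loop with stand-in `ask_llm`, `retrieve`, and `score` callables (prompt wording assumed):

```python
# Schematic beam-style query expansion for zero-shot ODQA, loosely following
# the ALLIES idea. `ask_llm`, `retrieve`, and `score` are stand-in callables.
def allies(question, ask_llm, retrieve, score, beam=3, depth=2):
    beams = [(question, retrieve(question))]   # (query, evidence-list) pairs
    for _ in range(depth):
        candidates = []
        for query, evidence in beams:
            follow_ups = ask_llm(
                f"Generate {beam} follow-up queries that would help answer: "
                f"{question}\nCurrent query: {query}"
            )  # assumed to return a list of query strings
            for fq in follow_ups:
                candidates.append((fq, evidence + retrieve(fq)))
        # Keep the top-scoring expansions, beam-search style.
        beams = sorted(candidates, key=lambda b: score(question, b[1]),
                       reverse=True)[:beam]
    _, best_evidence = beams[0]
    context = "\n".join(best_evidence)
    return ask_llm(f"Evidence:\n{context}\nAnswer the question: {question}")
```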
https://aclanthology.org/2023.findings-emnlp.248.bib
https://aclanthology.org/2023.findings-emnlp.248/
@inproceedings{pan-etal-2023-logic, title = "Logic-{LM}: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning", author = "Pan, Liangming and Albalak, Alon and Wang, Xinyi and Wang, William", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.248", doi = "10.18653/v1/2023.findings-emnlp.248", pages = "3806--3824", abstract = "Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver{'}s error messages to revise symbolic formalizations. We demonstrate Logic-LM{'}s effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2{\%} over using LLM alone with standard prompting and 18.4{\%} over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning.", }
Large Language Models (LLMs) have shown human-like reasoning abilities but still struggle with complex logical problems. This paper introduces a novel framework, Logic-LM, which integrates LLMs with symbolic solvers to improve logical problem-solving. Our method first utilizes LLMs to translate a natural language problem into a symbolic formulation. Afterward, a deterministic symbolic solver performs inference on the formulated problem. We also introduce a self-refinement module, which utilizes the symbolic solver{'}s error messages to revise symbolic formalizations. We demonstrate Logic-LM{'}s effectiveness on five logical reasoning datasets: ProofWriter, PrOntoQA, FOLIO, LogicalDeduction, and AR-LSAT. On average, Logic-LM achieves a significant performance boost of 39.2{\%} over using LLM alone with standard prompting and 18.4{\%} over LLM with chain-of-thought prompting. Our findings suggest that Logic-LM, by combining LLMs with symbolic logic, offers a promising avenue for faithful logical reasoning.
[ "Pan, Liangming", "Albalak, Alon", "Wang, Xinyi", "Wang, William" ]
Logic-LM: Empowering Large Language Models with Symbolic Solvers for Faithful Logical Reasoning
findings-emnlp.248
2305.12295
[ "https://github.com/teacherpeterpan/logic-llm" ]
https://huggingface.co/papers/2305.12295
1
0
0
4
[]
[ "renma/ProntoQA", "renma/ProofWriter" ]
[]
1
Poster
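The Logic-LM pipeline above has three moving parts: LLM translation into a symbolic formulation, deterministic solving, and self-refinement driven by solver error messages. A schematic sketch, with the prompt wording and the solver interface as assumptions:

```python
# Sketch of the Logic-LM control flow: translate, solve, self-refine.
def logic_lm(problem, ask_llm, run_solver, max_refinements=3):
    formulation = ask_llm(
        "Translate this reasoning problem into facts and rules for a "
        f"symbolic solver:\n{problem}"
    )
    for _ in range(max_refinements):
        ok, result = run_solver(formulation)        # deterministic inference
        if ok:
            return result
        # Self-refinement: feed the solver's error message back to the LLM.
        formulation = ask_llm(
            f"The solver raised an error:\n{result}\n"
            f"Fix the symbolic formulation:\n{formulation}"
        )
    return None  # caller can fall back to, e.g., a chain-of-thought answer
```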
https://aclanthology.org/2023.findings-emnlp.249.bib
https://aclanthology.org/2023.findings-emnlp.249/
@inproceedings{liu-etal-2023-simfy, title = "{S}i{MF}y: A Simple Yet Effective Approach for Temporal Knowledge Graph Reasoning", author = "Liu, Zhengtao and Tan, Lei and Li, Mengfan and Wan, Yao and Jin, Hai and Shi, Xuanhua", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.249", doi = "10.18653/v1/2023.findings-emnlp.249", pages = "3825--3836", abstract = "Temporal Knowledge Graph (TKG) reasoning, which focuses on leveraging temporal information to infer future facts in knowledge graphs, plays a vital role in knowledge graph completion. Typically, existing works for this task design graph neural networks and recurrent neural networks to respectively capture the structural and temporal information in KGs. Despite their effectiveness, in our practice, we find that they tend to suffer the issues of low training efficiency and insufficient generalization ability, which can be attributed to the over design of model architectures. To this end, this paper aims to figure out whether the current complex model architectures are necessary for temporal knowledge graph reasoning. As a result, we put forward a simple yet effective approach (termed SiMFy), which simply utilizes multilayer perceptron (MLP) to model the structural dependencies of events and adopts a fixed-frequency strategy to incorporate historical frequency during inference. Extensive experiments on real-world datasets demonstrate that our SiMFy can reach state-of-the-art performance with the following strengths: 1) faster convergence speed and better generalization ability; 2) a much smaller time consumption in the training process; and 3) better ability to capture the structural dependencies of events in KGs. These results provide evidence that the substitution of complex models with simpler counterparts is a feasible strategy.", }
Temporal Knowledge Graph (TKG) reasoning, which focuses on leveraging temporal information to infer future facts in knowledge graphs, plays a vital role in knowledge graph completion. Typically, existing works for this task design graph neural networks and recurrent neural networks to respectively capture the structural and temporal information in KGs. Despite their effectiveness, in our practice, we find that they tend to suffer the issues of low training efficiency and insufficient generalization ability, which can be attributed to the over design of model architectures. To this end, this paper aims to figure out whether the current complex model architectures are necessary for temporal knowledge graph reasoning. As a result, we put forward a simple yet effective approach (termed SiMFy), which simply utilizes multilayer perceptron (MLP) to model the structural dependencies of events and adopts a fixed-frequency strategy to incorporate historical frequency during inference. Extensive experiments on real-world datasets demonstrate that our SiMFy can reach state-of-the-art performance with the following strengths: 1) faster convergence speed and better generalization ability; 2) a much smaller time consumption in the training process; and 3) better ability to capture the structural dependencies of events in KGs. These results provide evidence that the substitution of complex models with simpler counterparts is a feasible strategy.
[ "Liu, Zhengtao", "Tan, Lei", "Li, Mengfan", "Wan, Yao", "Jin, Hai", "Shi, Xuanhua" ]
SiMFy: A Simple Yet Effective Approach for Temporal Knowledge Graph Reasoning
findings-emnlp.249
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
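SiMFy's design, per the abstract, is deliberately simple: an MLP scores candidate objects from (subject, relation) embeddings, and inference mixes those scores with a fixed historical-frequency distribution. A sketch under assumed dimensions and mixing weight:

```python
# Sketch of the SiMFy idea: MLP scores mixed with fixed historical frequency.
# Dimensions and the mixing coefficient are assumptions.
import torch
import torch.nn as nn

class SiMFy(nn.Module):
    def __init__(self, n_entities, n_relations, dim=200, alpha=0.9):
        super().__init__()
        self.ent = nn.Embedding(n_entities, dim)
        self.rel = nn.Embedding(n_relations, dim)
        self.mlp = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(),
                                 nn.Linear(dim, n_entities))
        self.alpha = alpha

    def forward(self, subj, rel, hist_freq):
        # hist_freq: (batch, n_entities) normalized counts of past objects
        x = torch.cat([self.ent(subj), self.rel(rel)], dim=-1)
        mlp_scores = torch.softmax(self.mlp(x), dim=-1)
        return self.alpha * mlp_scores + (1 - self.alpha) * hist_freq
```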
https://aclanthology.org/2023.findings-emnlp.250.bib
https://aclanthology.org/2023.findings-emnlp.250/
@inproceedings{wang-etal-2023-understanding, title = "Understanding Translationese in Cross-Lingual Summarization", author = "Wang, Jiaan and Meng, Fandong and Liang, Yunlong and Zhang, Tingyi and Xu, Jiarong and Li, Zhixu and Zhou, Jie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.250", doi = "10.18653/v1/2023.findings-emnlp.250", pages = "3837--3849", abstract = "Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language. Unlike monolingual summarization (MS), naturally occurring source-language documents paired with target-language summaries are rare. To collect large-scale CLS data, existing datasets typically involve translation in their creation. However, the translated text is distinguished from the text originally written in that language, i.e., translationese. In this paper, we first confirm that different approaches of constructing CLS datasets will lead to different degrees of translationese. Then we systematically investigate how translationese affects CLS model evaluation and performance when it appears in source documents or target summaries. In detail, we find that (1) the translationese in documents or summaries of test sets might lead to the discrepancy between human judgment and automatic evaluation; (2) the translationese in training sets would harm model performance in real-world applications; (3) though machine-translated documents involve translationese, they are very useful for building CLS systems on low-resource languages under specific training strategies. Lastly, we give suggestions for future CLS research including dataset and model developments. We hope that our work could let researchers notice the phenomenon of translationese in CLS and take it into account in the future.", }
Given a document in a source language, cross-lingual summarization (CLS) aims at generating a concise summary in a different target language. Unlike monolingual summarization (MS), naturally occurring source-language documents paired with target-language summaries are rare. To collect large-scale CLS data, existing datasets typically involve translation in their creation. However, the translated text is distinguished from the text originally written in that language, i.e., translationese. In this paper, we first confirm that different approaches of constructing CLS datasets will lead to different degrees of translationese. Then we systematically investigate how translationese affects CLS model evaluation and performance when it appears in source documents or target summaries. In detail, we find that (1) the translationese in documents or summaries of test sets might lead to the discrepancy between human judgment and automatic evaluation; (2) the translationese in training sets would harm model performance in real-world applications; (3) though machine-translated documents involve translationese, they are very useful for building CLS systems on low-resource languages under specific training strategies. Lastly, we give suggestions for future CLS research including dataset and model developments. We hope that our work could let researchers notice the phenomenon of translationese in CLS and take it into account in the future.
[ "Wang, Jiaan", "Meng, F", "ong", "Liang, Yunlong", "Zhang, Tingyi", "Xu, Jiarong", "Li, Zhixu", "Zhou, Jie" ]
Understanding Translationese in Cross-Lingual Summarization
findings-emnlp.250
2212.07220
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.251.bib
https://aclanthology.org/2023.findings-emnlp.251/
@inproceedings{hagag-tsarfaty-2023-truth, title = "The Truth, The Whole Truth, and Nothing but the Truth: A New Benchmark Dataset for {H}ebrew Text Credibility Assessment", author = "Hagag, Ben and Tsarfaty, Reut", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.251", doi = "10.18653/v1/2023.findings-emnlp.251", pages = "3850--3865", abstract = "In the age of information overload, it is more important than ever to discern fact from fiction. From the internet to traditional media, we are constantly confronted with a deluge of information, much of which comes from politicians and other public figures who wield significant influence. In this paper, we introduce HeTrue: a new, publicly available dataset for evaluating the credibility of statements made by Israeli public figures and politicians. This dataset consists of 1021 statements, manually annotated by Israeli professional journalists, for their credibility status. Using this corpus, we set out to assess whether the credibility of statements can be predicted based on the text alone. To establish a baseline, we compare text-only methods with others using additional data like metadata, context, and evidence. Furthermore, we develop several credibility assessment models, including a feature-based model that utilizes linguistic features, and state-of-the-art transformer-based models with contextualized embeddings from a pre-trained encoder. Empirical results demonstrate improved performance when models integrate statement and context, outperforming those relying on the statement text alone. Our best model, which also integrates evidence, achieves a 48.3 F1 Score, suggesting that HeTrue is a challenging benchmark, calling for further work on this task.", }
In the age of information overload, it is more important than ever to discern fact from fiction. From the internet to traditional media, we are constantly confronted with a deluge of information, much of which comes from politicians and other public figures who wield significant influence. In this paper, we introduce HeTrue: a new, publicly available dataset for evaluating the credibility of statements made by Israeli public figures and politicians. This dataset consists of 1021 statements, manually annotated by Israeli professional journalists, for their credibility status. Using this corpus, we set out to assess whether the credibility of statements can be predicted based on the text alone. To establish a baseline, we compare text-only methods with others using additional data like metadata, context, and evidence. Furthermore, we develop several credibility assessment models, including a feature-based model that utilizes linguistic features, and state-of-the-art transformer-based models with contextualized embeddings from a pre-trained encoder. Empirical results demonstrate improved performance when models integrate statement and context, outperforming those relying on the statement text alone. Our best model, which also integrates evidence, achieves a 48.3 F1 Score, suggesting that HeTrue is a challenging benchmark, calling for further work on this task.
[ "Hagag, Ben", "Tsarfaty, Reut" ]
The Truth, The Whole Truth, and Nothing but the Truth: A New Benchmark Dataset for Hebrew Text Credibility Assessment
findings-emnlp.251
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.252.bib
https://aclanthology.org/2023.findings-emnlp.252/
@inproceedings{kumar-etal-2023-indisocialft, title = "{I}ndi{S}ocial{FT}: Multilingual Word Representation for {I}ndian languages in code-mixed environment", author = "Kumar, Saurabh and Sanasam, Ranbir and Nandi, Sukumar", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.252", doi = "10.18653/v1/2023.findings-emnlp.252", pages = "3866--3871", abstract = "The increasing number of Indian language users on the internet necessitates the development of Indian language technologies. In response to this demand, our paper presents a generalized representation vector for diverse text characteristics, including native scripts, transliterated text, multilingual, code-mixed, and social media-related attributes. We gather text from both social media and well-formed sources and utilize the FastText model to create the {``}IndiSocialFT{''} embedding. Through intrinsic and extrinsic evaluation methods, we compare IndiSocialFT with three popular pretrained embeddings trained over Indian languages. Our findings show that the proposed embedding surpasses the baselines in most cases and languages, demonstrating its suitability for various NLP applications.", }
The increasing number of Indian language users on the internet necessitates the development of Indian language technologies. In response to this demand, our paper presents a generalized representation vector for diverse text characteristics, including native scripts, transliterated text, multilingual, code-mixed, and social media-related attributes. We gather text from both social media and well-formed sources and utilize the FastText model to create the {``}IndiSocialFT{''} embedding. Through intrinsic and extrinsic evaluation methods, we compare IndiSocialFT with three popular pretrained embeddings trained over Indian languages. Our findings show that the proposed embedding surpasses the baselines in most cases and languages, demonstrating its suitability for various NLP applications.
[ "Kumar, Saurabh", "Sanasam, Ranbir", "N", "i, Sukumar" ]
IndiSocialFT: Multilingual Word Representation for Indian languages in code-mixed environment
findings-emnlp.252
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
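Since the abstract above builds IndiSocialFT with the FastText model, the training step itself is compact. A sketch using the official `fasttext` package; the corpus path and hyperparameters are placeholders, not the published configuration:

```python
# Sketch: training a subword embedding on mixed-script, code-mixed social
# media text. Path and hyperparameters are placeholders.
import fasttext

# One preprocessed sentence per line: native script, transliterations, and
# code-mixed text all in the same corpus file.
model = fasttext.train_unsupervised(
    "indic_social_corpus.txt",   # placeholder path
    model="skipgram",
    dim=300,
    minn=2, maxn=5,              # subword n-grams help with transliteration
    epoch=10,
)
model.save_model("indisocialft.bin")
print(model.get_nearest_neighbors("ভালো"))  # probe the embedding space
```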
https://aclanthology.org/2023.findings-emnlp.253.bib
https://aclanthology.org/2023.findings-emnlp.253/
@inproceedings{wang-etal-2023-adaptive, title = "Adaptive Hinge Balance Loss for Document-Level Relation Extraction", author = "Wang, Jize and Le, Xinyi and Peng, Xiaodi and Chen, Cailian", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.253", doi = "10.18653/v1/2023.findings-emnlp.253", pages = "3872--3878", abstract = "Document-Level Relation Extraction aims at predicting relations between entities from multiple sentences. A common practice is to select multi-label classification thresholds to decide whether a relation exists between an entity pair. However, in the document-level task, most entity pairs do not express any relations, resulting in a highly imbalanced distribution between positive and negative classes. We argue that the imbalance problem affects threshold selection and may lead to incorrect {``}no-relation{''} predictions. In this paper, we propose to down-weight the easy negatives by utilizing a distance between the classification threshold and the predicted score of each relation. Our novel Adaptive Hinge Balance Loss measures the difficulty of each relation class with the distance, putting more focus on hard, misclassified relations, i.e. the minority positive relations. Experiment results on Re-DocRED demonstrate the superiority of our approach over other balancing methods. Source codes are available at https://github.com/Jize-W/HingeABL.", }
Document-Level Relation Extraction aims at predicting relations between entities from multiple sentences. A common practice is to select multi-label classification thresholds to decide whether a relation exists between an entity pair. However, in the document-level task, most entity pairs do not express any relations, resulting in a highly imbalanced distribution between positive and negative classes. We argue that the imbalance problem affects threshold selection and may lead to incorrect {``}no-relation{''} predictions. In this paper, we propose to down-weight the easy negatives by utilizing a distance between the classification threshold and the predicted score of each relation. Our novel Adaptive Hinge Balance Loss measures the difficulty of each relation class with the distance, putting more focus on hard, misclassified relations, i.e. the minority positive relations. Experiment results on Re-DocRED demonstrate the superiority of our approach over other balancing methods. Source codes are available at https://github.com/Jize-W/HingeABL.
[ "Wang, Jize", "Le, Xinyi", "Peng, Xiaodi", "Chen, Cailian" ]
Adaptive Hinge Balance Loss for Document-Level Relation Extraction
findings-emnlp.253
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
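The loss above hinges on the distance between each relation score and the classification threshold, down-weighting the easy negatives that sit far below it. One plausible focal-style instantiation of that idea, not the paper's exact formula:

```python
# Sketch: threshold-distance-weighted hinge loss for multi-label document-
# level relation extraction. The weighting form is an assumption.
import torch

def adaptive_hinge_balance(scores, threshold, labels, gamma=2.0):
    # scores, labels: (batch, n_relations); threshold: (batch, 1) learned score
    margin_pos = torch.relu(threshold - scores)     # positives should beat it
    margin_neg = torch.relu(scores - threshold)     # negatives should stay below
    # Focal-style weights: examples near or past the threshold (hard) get
    # larger weights; easy negatives far below contribute little.
    w_pos = (1 - torch.sigmoid(scores - threshold)) ** gamma
    w_neg = torch.sigmoid(scores - threshold) ** gamma
    loss = labels * w_pos * margin_pos + (1 - labels) * w_neg * margin_neg
    return loss.mean()
```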
https://aclanthology.org/2023.findings-emnlp.254.bib
https://aclanthology.org/2023.findings-emnlp.254/
@inproceedings{li-etal-2023-answer, title = "Answer-state Recurrent Relational Network ({A}s{RRN}) for Constructed Response Assessment and Feedback Grouping", author = "Li, Zhaohui and Lloyd, Susan and Beckman, Matthew and Passonneau, Rebecca", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.254", doi = "10.18653/v1/2023.findings-emnlp.254", pages = "3879--3891", abstract = "STEM educators must trade off the ease of assessing selected response (SR) questions, like multiple choice, with constructed response (CR) questions, where students articulate their own reasoning. Our work addresses a CR type new to NLP but common in college STEM, consisting of multiple questions per context. To relate the context, the questions, the reference responses, and students{'} answers, we developed an Answer-state Recurrent Relational Network (AsRRN). In recurrent time-steps, relation vectors are learned for specific dependencies in a computational graph, where the nodes encode the distinct types of text input. AsRRN incorporates contrastive loss for better representation learning, which improves performance and supports student feedback. AsRRN was developed on a new dataset of 6,532 student responses to three, two-part CR questions. AsRRN outperforms classifiers based on LLMs, a previous relational network for CR questions, and few-shot learning with GPT-3.5. Ablation studies show the distinct contributions of AsRRN{'}s dependency structure, the number of time steps in the recurrence, and the contrastive loss.", }
STEM educators must trade off the ease of assessing selected response (SR) questions, like multiple choice, with constructed response (CR) questions, where students articulate their own reasoning. Our work addresses a CR type new to NLP but common in college STEM, consisting of multiple questions per context. To relate the context, the questions, the reference responses, and students{'} answers, we developed an Answer-state Recurrent Relational Network (AsRRN). In recurrent time-steps, relation vectors are learned for specific dependencies in a computational graph, where the nodes encode the distinct types of text input. AsRRN incorporates contrastive loss for better representation learning, which improves performance and supports student feedback. AsRRN was developed on a new dataset of 6,532 student responses to three, two-part CR questions. AsRRN outperforms classifiers based on LLMs, a previous relational network for CR questions, and few-shot learning with GPT-3.5. Ablation studies show the distinct contributions of AsRRN{'}s dependency structure, the number of time steps in the recurrence, and the contrastive loss.
[ "Li, Zhaohui", "Lloyd, Susan", "Beckman, Matthew", "Passonneau, Rebecca" ]
Answer-state Recurrent Relational Network (AsRRN) for Constructed Response Assessment and Feedback Grouping
findings-emnlp.254
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
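AsRRN, per the abstract, learns relation vectors over a dependency graph linking context, question, reference responses, and student answer, refined across recurrent time-steps. A generic recurrent relational update in that spirit; the node set, message function, and dimensions are all assumptions:

```python
# Sketch: recurrent relational updates over a small graph of text nodes
# (context -> question -> reference -> student answer). Illustrative only.
import torch
import torch.nn as nn

class RecurrentRelational(nn.Module):
    def __init__(self, dim=256, steps=3):
        super().__init__()
        self.msg = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU())
        self.gru = nn.GRUCell(dim, dim)
        self.steps = steps

    def forward(self, node_states, edges):
        # node_states: (n_nodes, dim); edges: list of (src, dst) dependencies
        h = node_states
        for _ in range(self.steps):
            agg = torch.zeros_like(h)
            for src, dst in edges:
                agg[dst] = agg[dst] + self.msg(torch.cat([h[src], h[dst]]))
            h = self.gru(agg, h)                    # recurrent state update
        return h                                    # refined node representations
```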
https://aclanthology.org/2023.findings-emnlp.255.bib
https://aclanthology.org/2023.findings-emnlp.255/
@inproceedings{xu-etal-2023-low, title = "Low-Resource Comparative Opinion Quintuple Extraction by Data Augmentation with Prompting", author = "Xu, Qingting and Hong, Yu and Zhao, Fubang and Song, Kaisong and Kang, Yangyang and Chen, Jiaxiang and Zhou, Guodong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.255", doi = "10.18653/v1/2023.findings-emnlp.255", pages = "3892--3897", abstract = "Comparative Opinion Quintuple Extraction (COQE) aims to predict comparative opinion quintuples from comparative sentences. These quintuples include subject, object, shareable aspect, comparative opinion, and preference. The existing pipeline-based COQE method suffers from error propagation. In addition, the complexity and insufficient amounts of annotated data hinder the performance of COQE models. In this paper, we introduce a novel approach called low-resource comparative opinion quintuple extraction by Data Augmentation with Prompting (DAP). Firstly, we present an end-to-end model architecture that is better suited to the data augmentation method from triplets to quintuples and can effectively avoid error propagation. Additionally, we introduce a data-centric augmentation approach that leverages the robust generative abilities of ChatGPT and integrates transfer learning techniques. Experimental results over three datasets (Camera, Car, Ele) demonstrate that our approach yields substantial improvements and achieves state-of-the-art results. The source code and data are publicly released at: https://github.com/qtxu-nlp/COQE-DAP.", }
Comparative Opinion Quintuple Extraction (COQE) aims to predict comparative opinion quintuples from comparative sentences. These quintuples include subject, object, shareable aspect, comparative opinion, and preference. The existing pipeline-based COQE method suffers from error propagation. In addition, the complexity and insufficient amounts of annotated data hinder the performance of COQE models. In this paper, we introduce a novel approach called low-resource comparative opinion quintuple extraction by Data Augmentation with Prompting (DAP). Firstly, we present an end-to-end model architecture that is better suited to the data augmentation method from triplets to quintuples and can effectively avoid error propagation. Additionally, we introduce a data-centric augmentation approach that leverages the robust generative abilities of ChatGPT and integrates transfer learning techniques. Experimental results over three datasets (Camera, Car, Ele) demonstrate that our approach yields substantial improvements and achieves state-of-the-art results. The source code and data are publicly released at: https://github.com/qtxu-nlp/COQE-DAP.
[ "Xu, Qingting", "Hong, Yu", "Zhao, Fubang", "Song, Kaisong", "Kang, Yangyang", "Chen, Jiaxiang", "Zhou, Guodong" ]
Low-Resource Comparative Opinion Quintuple Extraction by Data Augmentation with Prompting
findings-emnlp.255
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.256.bib
https://aclanthology.org/2023.findings-emnlp.256/
@inproceedings{yang-etal-2023-new-benchmark, title = "A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection", author = "Yang, Shiping and Sun, Renliang and Wan, Xiaojun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.256", doi = "10.18653/v1/2023.findings-emnlp.256", pages = "3898--3908", abstract = "Large Language Models (LLMs) have shown their ability to collaborate effectively with humans in real-world scenarios. However, LLMs are apt to generate hallucinations, i.e., make up incorrect text and unverified information, which can cause significant damage when deployed for mission-critical tasks. In this paper, we propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion. To facilitate future studies and assess different methods, we construct a hallucination detection benchmark named PHD, which is generated by ChatGPT and annotated by human annotators. Contrasting previous studies of zero-resource hallucination detection, our method and benchmark concentrate on passage-level detection instead of sentence-level. We empirically evaluate our method and existing zero-resource detection methods on two datasets. The experimental results demonstrate that the proposed method considerably outperforms the baselines while costing fewer tokens and less time. Furthermore, we manually analyze some hallucination cases that the LLM failed to capture, revealing the shared limitation of zero-resource methods.", }
Large Language Models (LLMs) have shown their ability to collaborate effectively with humans in real-world scenarios. However, LLMs are apt to generate hallucinations, i.e., make up incorrect text and unverified information, which can cause significant damage when deployed for mission-critical tasks. In this paper, we propose a self-check approach based on reverse validation to detect factual errors automatically in a zero-resource fashion. To facilitate future studies and assess different methods, we construct a hallucination detection benchmark named PHD, which is generated by ChatGPT and annotated by human annotators. Contrasting previous studies of zero-resource hallucination detection, our method and benchmark concentrate on passage-level detection instead of sentence-level. We empirically evaluate our method and existing zero-resource detection methods on two datasets. The experimental results demonstrate that the proposed method considerably outperforms the baselines while costing fewer tokens and less time. Furthermore, we manually analyze some hallucination cases that the LLM failed to capture, revealing the shared limitation of zero-resource methods.
[ "Yang, Shiping", "Sun, Renliang", "Wan, Xiaojun" ]
A New Benchmark and Reverse Validation Method for Passage-level Hallucination Detection
findings-emnlp.256
2310.06498
[ "https://github.com/maybenotime/phd" ]
https://huggingface.co/papers/2310.06498
1
1
0
3
[]
[]
[]
1
Poster
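For the passage-level self-check in the record above, the abstract does not spell out the mechanism, so the following is only one plausible reading of "reverse validation", with every prompt, name, and scoring rule assumed: mask the passage's subject, ask the model to recover it, and treat low recovery as a hallucination signal. The paper's actual procedure may differ; see its repository (github.com/maybenotime/phd).

```python
from typing import Callable

def reverse_validation_score(llm: Callable[[str], str],
                             passage: str, entity: str, n: int = 3) -> float:
    """Hypothetical passage-level self-check: can the model recover the
    subject entity from the claims in its own passage? A low recovery
    rate is taken as a hallucination signal. Illustration only, not the
    paper's method."""
    masked = passage.replace(entity, "[ENTITY]")
    prompt = ("The following passage is about a single entity, written as "
              f"[ENTITY].\n\n{masked}\n\nWho or what is [ENTITY]? "
              "Answer with a name only.")
    answers = [llm(prompt) for _ in range(n)]
    hits = sum(entity.lower() in a.lower() for a in answers)
    return hits / n  # low score -> passage likely hallucinated
```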
https://aclanthology.org/2023.findings-emnlp.257.bib
https://aclanthology.org/2023.findings-emnlp.257/
@inproceedings{xia-etal-2023-speculative, title = "Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation", author = "Xia, Heming and Ge, Tao and Wang, Peiyi and Chen, Si-Qing and Wei, Furu and Sui, Zhifang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.257", doi = "10.18653/v1/2023.findings-emnlp.257", pages = "3909--3925", abstract = "We propose Speculative Decoding (SpecDec), for the first time ever, to formally study exploiting the idea of speculative execution to accelerate autoregressive (AR) decoding. Speculative Decoding has two innovations: Spec-Drafter {--} an independent model specially optimized for efficient and accurate drafting {--} and Spec-Verification {--} a reliable method for verifying the drafted tokens efficiently in the decoding paradigm. Experimental results on various seq2seq tasks including machine translation and abstractive summarization show our approach can achieve around 5x speedup for the popular Transformer architectures with comparable generation quality to beam search decoding, refreshing the impression that the draft-then-verify paradigm introduces only 1.4x{\textasciitilde}2x speedup. In addition to the remarkable speedup, we also demonstrate 3 additional advantages of SpecDec, revealing its practical value for accelerating generative models in real-world applications. Our models and codes are available at https://github.com/hemingkx/SpecDec.", }
We propose Speculative Decoding (SpecDec), for the first time ever, to formally study exploiting the idea of speculative execution to accelerate autoregressive (AR) decoding. Speculative Decoding has two innovations: Spec-Drafter {--} an independent model specially optimized for efficient and accurate drafting {--} and Spec-Verification {--} a reliable method for verifying the drafted tokens efficiently in the decoding paradigm. Experimental results on various seq2seq tasks including machine translation and abstractive summarization show our approach can achieve around 5x speedup for the popular Transformer architectures with comparable generation quality to beam search decoding, refreshing the impression that the draft-then-verify paradigm introduces only 1.4x{\textasciitilde}2x speedup. In addition to the remarkable speedup, we also demonstrate 3 additional advantages of SpecDec, revealing its practical value for accelerating generative models in real-world applications. Our models and codes are available at https://github.com/hemingkx/SpecDec.
[ "Xia, Heming", "Ge, Tao", "Wang, Peiyi", "Chen, Si-Qing", "Wei, Furu", "Sui, Zhifang" ]
Speculative Decoding: Exploiting Speculative Execution for Accelerating Seq2seq Generation
findings-emnlp.257
2203.16487
[ "https://github.com/hemingkx/gad" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
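The SpecDec record above centers on the draft-then-verify loop, which is concrete enough to sketch. The greedy variant below is a generic illustration: `draft_next`, `target_next`, and the `eos` id are hypothetical, and in the real system verification is a single batched forward pass of the target model rather than a per-position loop.

```python
from typing import Callable, List

def speculative_decode(
    draft_next: Callable[[List[int]], int],    # cheap model: greedy next-token id
    target_next: Callable[[List[int]], int],   # large model: greedy next-token id
    prefix: List[int],
    k: int = 5,
    max_len: int = 64,
    eos: int = 0,
) -> List[int]:
    """Draft-then-verify loop (greedy variant, illustrative). The draft model
    proposes k tokens; the target model re-derives its own next token at each
    drafted position and we keep the drafted tokens up to the first
    disagreement, plus the target's correction."""
    tokens = list(prefix)
    while len(tokens) < max_len and (not tokens or tokens[-1] != eos):
        # 1) Draft k tokens autoregressively with the cheap model.
        draft, ctx = [], list(tokens)
        for _ in range(k):
            t = draft_next(ctx)
            draft.append(t)
            ctx.append(t)
        # 2) Verify position by position against the target model.
        accepted, ctx = [], list(tokens)
        for t in draft:
            expected = target_next(ctx)
            if expected == t:
                accepted.append(t)
                ctx.append(t)
            else:
                accepted.append(expected)  # target's correction ends the block
                break
        else:
            accepted.append(target_next(ctx))  # all drafts accepted: one bonus token
        tokens.extend(accepted)
    return tokens
```

Every iteration commits at least one target-approved token, so the output matches greedy decoding of the target model, while long accepted blocks deliver the speedup.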
https://aclanthology.org/2023.findings-emnlp.258.bib
https://aclanthology.org/2023.findings-emnlp.258/
@inproceedings{wang-etal-2023-app, title = "{APP}: Adaptive Prototypical Pseudo-Labeling for Few-shot {OOD} Detection", author = "Wang, Pei and He, Keqing and Mou, Yutao and Song, Xiaoshuai and Wu, Yanan and Wang, Jingang and Xian, Yunsen and Cai, Xunliang and Xu, Weiran", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.258", doi = "10.18653/v1/2023.findings-emnlp.258", pages = "3926--3939", abstract = "Detecting out-of-domain (OOD) intents from user queries is essential for a task-oriented dialogue system. Previous OOD detection studies generally work on the assumption that plenty of labeled IND intents exist. In this paper, we focus on a more practical few-shot OOD setting where there are only a few labeled IND data and massive unlabeled mixed data that may belong to IND or OOD. The new scenario carries two key challenges: learning discriminative representations using limited IND data and leveraging unlabeled mixed data. Therefore, we propose an adaptive prototypical pseudo-labeling (APP) method for few-shot OOD detection, including a prototypical OOD detection framework (ProtoOOD) to facilitate low-resource OOD detection using limited IND data, and an adaptive pseudo-labeling method to produce high-quality pseudo OOD and IND labels. Extensive experiments and analysis demonstrate the effectiveness of our method for few-shot OOD detection.", }
Detecting out-of-domain (OOD) intents from user queries is essential for a task-oriented dialogue system. Previous OOD detection studies generally work on the assumption that plenty of labeled IND intents exist. In this paper, we focus on a more practical few-shot OOD setting where there are only a few labeled IND data and massive unlabeled mixed data that may belong to IND or OOD. The new scenario carries two key challenges: learning discriminative representations using limited IND data and leveraging unlabeled mixed data. Therefore, we propose an adaptive prototypical pseudo-labeling (APP) method for few-shot OOD detection, including a prototypical OOD detection framework (ProtoOOD) to facilitate low-resource OOD detection using limited IND data, and an adaptive pseudo-labeling method to produce high-quality pseudo OOD and IND labels. Extensive experiments and analysis demonstrate the effectiveness of our method for few-shot OOD detection.
[ "Wang, Pei", "He, Keqing", "Mou, Yutao", "Song, Xiaoshuai", "Wu, Yanan", "Wang, Jingang", "Xian, Yunsen", "Cai, Xunliang", "Xu, Weiran" ]
APP: Adaptive Prototypical Pseudo-Labeling for Few-shot OOD Detection
findings-emnlp.258
2310.13380
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
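For the prototypical OOD framework in the record above, a minimal sketch of prototype-based scoring follows, assuming mean-pooled class prototypes and Euclidean distance; the adaptive thresholding and pseudo-label selection that make up APP proper are not reproduced here.

```python
import numpy as np

def prototype_ood_scores(embeddings: np.ndarray,
                         labels: np.ndarray,
                         queries: np.ndarray):
    """Sketch of prototype-based OOD scoring: class prototypes are the mean
    embeddings of the few labeled IND examples; a query's OOD score is its
    distance to the nearest prototype. Thresholding or ranking that distance
    over unlabeled data is where an adaptive pseudo-labeling step would go."""
    classes = np.unique(labels)
    protos = np.stack([embeddings[labels == c].mean(axis=0) for c in classes])
    # Euclidean distance from each query to each class prototype: (m, c).
    d = np.linalg.norm(queries[:, None, :] - protos[None, :, :], axis=-1)
    nearest = d.min(axis=1)           # small -> likely IND, large -> likely OOD
    pred = classes[d.argmin(axis=1)]  # IND label to assign if the query is kept
    return nearest, pred
```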
https://aclanthology.org/2023.findings-emnlp.259.bib
https://aclanthology.org/2023.findings-emnlp.259/
@inproceedings{zhang-etal-2023-2iner, title = "2{INER}: Instructive and In-Context Learning on Few-Shot Named Entity Recognition", author = "Zhang, Jiasheng and Liu, Xikai and Lai, Xinyi and Gao, Yan and Wang, Shusen and Hu, Yao and Lin, Yiqing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.259", doi = "10.18653/v1/2023.findings-emnlp.259", pages = "3940--3951", abstract = "Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extracting, to enhance the model{'}s understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms.", }
Prompt-based learning has emerged as a powerful technique in natural language processing (NLP) due to its ability to leverage pre-training knowledge for downstream few-shot tasks. In this paper, we propose 2INER, a novel text-to-text framework for Few-Shot Named Entity Recognition (NER) tasks. Our approach employs instruction finetuning based on InstructionNER to enable the model to effectively comprehend and process task-specific instructions, including both main and auxiliary tasks. We also introduce a new auxiliary task, called Type Extracting, to enhance the model{'}s understanding of entity types in the overall semantic context of a sentence. To facilitate in-context learning, we concatenate examples to the input, enabling the model to learn from additional contextual information. Experimental results on four datasets demonstrate that our approach outperforms existing Few-Shot NER methods and remains competitive with state-of-the-art standard NER algorithms.
[ "Zhang, Jiasheng", "Liu, Xikai", "Lai, Xinyi", "Gao, Yan", "Wang, Shusen", "Hu, Yao", "Lin, Yiqing" ]
2INER: Instructive and In-Context Learning on Few-Shot Named Entity Recognition
findings-emnlp.259
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
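The 2INER record above combines task instructions with concatenated in-context examples. The prompt builder below is illustrative only: the template wording, the "entity: type" output format, and the demo tuple layout are assumptions, and the paper's Type Extracting auxiliary task is omitted.

```python
from typing import List, Tuple

def iner_prompt(sentence: str,
                entity_types: List[str],
                demos: List[Tuple[str, str]]) -> str:
    """Build an instruction + in-context prompt for text-to-text NER in the
    spirit of the record above. `demos` is a list of (sentence, entities)
    pairs, where entities is a pre-formatted 'entity: type' string."""
    demo_txt = "\n\n".join(f"Sentence: {s}\nEntities: {e}" for s, e in demos)
    return (f"Extract named entities of types {', '.join(entity_types)} "
            f"from the sentence, as 'entity: type' pairs.\n\n"
            f"{demo_txt}\n\nSentence: {sentence}\nEntities:")

# Example usage with made-up demos:
# print(iner_prompt("Rome hosted the summit.", ["LOC", "ORG"],
#                   [("Paris is in France.", "Paris: LOC; France: LOC")]))
```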
https://aclanthology.org/2023.findings-emnlp.260.bib
https://aclanthology.org/2023.findings-emnlp.260/
@inproceedings{wang-etal-2023-generative-emotion, title = "Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge", author = "Wang, Fanfan and Yu, Jianfei and Xia, Rui", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.260", doi = "10.18653/v1/2023.findings-emnlp.260", pages = "3952--3963", abstract = "Emotion Cause Triplet Extraction in Conversations (ECTEC) aims to simultaneously extract emotion utterances, emotion categories, and cause utterances from conversations. However, existing studies mainly decompose the ECTEC task into multiple subtasks and solve them in a pipeline manner. Moreover, since conversations tend to contain many informal and implicit expressions, it often requires external knowledge and reasoning-based inference to accurately identify emotional and causal clues implicitly mentioned in the context, which are ignored by previous work. To address these limitations, in this paper, we propose a commonSense knowledge-enHanced generAtive fRameworK named SHARK, which formulates the ECTEC task as an index generation problem and generates the emotion-cause-category triplets in an end-to-end manner with a sequence-to-sequence model. Furthermore, we propose to incorporate both retrieved and generated commonsense knowledge into the generative model via a dual-view gate mechanism and a graph attention layer. Experimental results show that our SHARK model consistently outperforms several competitive systems on two benchmark datasets. Our source codes are publicly released at https://github.com/NUSTM/SHARK.", }
Emotion Cause Triplet Extraction in Conversations (ECTEC) aims to simultaneously extract emotion utterances, emotion categories, and cause utterances from conversations. However, existing studies mainly decompose the ECTEC task into multiple subtasks and solve them in a pipeline manner. Moreover, since conversations tend to contain many informal and implicit expressions, it often requires external knowledge and reasoning-based inference to accurately identify emotional and causal clues implicitly mentioned in the context, which are ignored by previous work. To address these limitations, in this paper, we propose a commonSense knowledge-enHanced generAtive fRameworK named SHARK, which formulates the ECTEC task as an index generation problem and generates the emotion-cause-category triplets in an end-to-end manner with a sequence-to-sequence model. Furthermore, we propose to incorporate both retrieved and generated commonsense knowledge into the generative model via a dual-view gate mechanism and a graph attention layer. Experimental results show that our SHARK model consistently outperforms several competitive systems on two benchmark datasets. Our source codes are publicly released at https://github.com/NUSTM/SHARK.
[ "Wang, Fanfan", "Yu, Jianfei", "Xia, Rui" ]
Generative Emotion Cause Triplet Extraction in Conversations with Commonsense Knowledge
findings-emnlp.260
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.261.bib
https://aclanthology.org/2023.findings-emnlp.261/
@inproceedings{xie-etal-2023-proto, title = "Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models", author = "Xie, Sean and Vosoughi, Soroush and Hassanpour, Saeed", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.261", doi = "10.18653/v1/2023.findings-emnlp.261", pages = "3964--3979", abstract = "Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability has been a major concern. Current methods for interpreting LLMs are post hoc, applied after inference time, and have limitations such as their focus on low-level features and lack of explainability at higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that allows LLMs to learn immediately interpretable embeddings during the fine-tuning stage while maintaining competitive performance. Our method{'}s applicability and interpretability are demonstrated through experiments on a wide range of NLP tasks, and our results indicate a new possibility of creating interpretable models without sacrificing performance. This novel approach to interpretability in LLMs can pave the way for more interpretable models without the need to sacrifice performance. We release our code at https://github.com/yx131/proto-lm.", }
Large Language Models (LLMs) have significantly advanced the field of Natural Language Processing (NLP), but their lack of interpretability has been a major concern. Current methods for interpreting LLMs are post hoc, applied after inference time, and have limitations such as their focus on low-level features and lack of explainability at higher-level text units. In this work, we introduce proto-lm, a prototypical network-based white-box framework that allows LLMs to learn immediately interpretable embeddings during the fine-tuning stage while maintaining competitive performance. Our method{'}s applicability and interpretability are demonstrated through experiments on a wide range of NLP tasks, and our results indicate a new possibility of creating interpretable models without sacrificing performance. This novel approach to interpretability in LLMs can pave the way for more interpretable models without the need to sacrifice performance. We release our code at https://github.com/yx131/proto-lm.
[ "Xie, Sean", "Vosoughi, Soroush", "Hassanpour, Saeed" ]
Proto-lm: A Prototypical Network-Based Framework for Built-in Interpretability in Large Language Models
findings-emnlp.261
2311.01732
[ "https://github.com/yx131/proto-lm" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.262.bib
https://aclanthology.org/2023.findings-emnlp.262/
@inproceedings{wen-etal-2023-grove, title = "{GROVE}: A Retrieval-augmented Complex Story Generation Framework with A Forest of Evidence", author = "Wen, Zhihua and Tian, Zhiliang and Wu, Wei and Yang, Yuxin and Shi, Yanqi and Huang, Zhen and Li, Dongsheng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.262", doi = "10.18653/v1/2023.findings-emnlp.262", pages = "3980--3998", abstract = "Conditional story generation is significant in human-machine interaction, particularly in producing stories with complex plots. While Large language models (LLMs) perform well on multiple NLP tasks, including story generation, it is challenging to generate stories with both complex and creative plots. Existing methods often rely on detailed prompts to guide LLMs to meet target conditions, which inadvertently restrict the creative potential of the generated stories. We argue that leveraging information from exemplary human-written stories facilitates generating more diverse plotlines. Delving deeper into story details helps build complex and credible plots. In this paper, we propose a retrieval-auGmented stoRy generation framework with a fOrest of eVidEnce (GROVE) to enhance stories{'} complexity. We build a retrieval repository for target conditions to produce few-shot examples to prompt LLMs. Additionally, we design an {``}asking-why{''} prompting scheme that extracts a forest of evidence, providing compensation for the ambiguities that may occur in the generated story. This iterative process uncovers underlying story backgrounds. Finally, we select the most fitting chains of evidence from the evidence forest and integrate them into the generated story, thereby enhancing the narrative{'}s complexity and credibility. Experimental results and numerous examples verify the effectiveness of our method.", }
Conditional story generation is significant in human-machine interaction, particularly in producing stories with complex plots. While Large language models (LLMs) perform well on multiple NLP tasks, including story generation, it is challenging to generate stories with both complex and creative plots. Existing methods often rely on detailed prompts to guide LLMs to meet target conditions, which inadvertently restrict the creative potential of the generated stories. We argue that leveraging information from exemplary human-written stories facilitates generating more diverse plotlines. Delving deeper into story details helps build complex and credible plots. In this paper, we propose a retrieval-auGmented stoRy generation framework with a fOrest of eVidEnce (GROVE) to enhance stories{'} complexity. We build a retrieval repository for target conditions to produce few-shot examples to prompt LLMs. Additionally, we design an {``}asking-why{''} prompting scheme that extracts a forest of evidence, providing compensation for the ambiguities that may occur in the generated story. This iterative process uncovers underlying story backgrounds. Finally, we select the most fitting chains of evidence from the evidence forest and integrate them into the generated story, thereby enhancing the narrative{'}s complexity and credibility. Experimental results and numerous examples verify the effectiveness of our method.
[ "Wen, Zhihua", "Tian, Zhiliang", "Wu, Wei", "Yang, Yuxin", "Shi, Yanqi", "Huang, Zhen", "Li, Dongsheng" ]
GROVE: A Retrieval-augmented Complex Story Generation Framework with A Forest of Evidence
findings-emnlp.262
2310.05388
[ "" ]
https://huggingface.co/papers/2310.05388
0
4
0
7
[]
[]
[]
1
Poster
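The GROVE record above describes an iterative "asking-why" prompting scheme that grows a forest of evidence behind a story. A rough sketch under stated assumptions: the prompts, the recursion policy, and the toy seed selection are all invented for illustration, and the paper's selection of the best evidence chains is omitted.

```python
from typing import Callable, Dict, List

def evidence_forest(llm: Callable[[str], str], story: str,
                    depth: int = 2, branches: int = 2) -> List[Dict]:
    """Grow a small tree of 'why' answers per ambiguous point in the story.
    Each node holds a claim and the model's reasons for it as children."""
    def ask_why(claim: str) -> str:
        return llm(f"Story so far:\n{story}\n\nWhy is this true or plausible: "
                   f"\"{claim}\"? Give {branches} short reasons, one per line.")

    def grow(claim: str, d: int) -> Dict:
        if d == 0:
            return {"claim": claim, "children": []}
        reasons = [r.strip() for r in ask_why(claim).splitlines() if r.strip()]
        return {"claim": claim,
                "children": [grow(r, d - 1) for r in reasons[:branches]]}

    # Toy seed selection: treat the first few sentences as points to explain.
    seeds = [s.strip() for s in story.split(".") if s.strip()][:branches]
    return [grow(s, depth) for s in seeds]
```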
https://aclanthology.org/2023.findings-emnlp.263.bib
https://aclanthology.org/2023.findings-emnlp.263/
@inproceedings{ma-etal-2023-kapalm, title = "{KAPALM}: Knowledge gr{AP}h enh{A}nced Language Models for Fake News Detection", author = "Ma, Jing and Chen, Chen and Hou, Chunyan and Yuan, Xiaojie", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.263", doi = "10.18653/v1/2023.findings-emnlp.263", pages = "3999--4009", abstract = "Social media has not only facilitated news consumption, but also led to the wide spread of fake news. Because news articles in social media are usually condensed and full of knowledge entities, existing methods of fake news detection use external entity knowledge. However, the majority of these methods focus on news entity information and ignore the structured knowledge among news entities. To address this issue, in this work, we propose a Knowledge grAPh enhAnced Language Model (KAPALM), which is a novel model that fuses coarse- and fine-grained representations of entity knowledge from Knowledge Graphs (KGs). Firstly, we identify entities in news content and link them to entities in KGs. Then, a subgraph of KGs is extracted to provide structured knowledge of entities in KGs and fed into a graph neural network to obtain the coarse-grained knowledge representation. This subgraph is pruned to provide fine-grained knowledge and fed into the attentive graph and graph pooling layer. Finally, we integrate the coarse- and fine-grained entity knowledge representations with the textual representation for fake news detection. The experimental results on two benchmark datasets show that our method is superior to state-of-the-art baselines. In addition, it is competitive in the few-shot scenario.", }
Social media has not only facilitated news consumption, but also led to the wide spread of fake news. Because news articles in social media are usually condensed and full of knowledge entities, existing methods of fake news detection use external entity knowledge. However, the majority of these methods focus on news entity information and ignore the structured knowledge among news entities. To address this issue, in this work, we propose a Knowledge grAPh enhAnced Language Model (KAPALM), which is a novel model that fuses coarse- and fine-grained representations of entity knowledge from Knowledge Graphs (KGs). Firstly, we identify entities in news content and link them to entities in KGs. Then, a subgraph of KGs is extracted to provide structured knowledge of entities in KGs and fed into a graph neural network to obtain the coarse-grained knowledge representation. This subgraph is pruned to provide fine-grained knowledge and fed into the attentive graph and graph pooling layer. Finally, we integrate the coarse- and fine-grained entity knowledge representations with the textual representation for fake news detection. The experimental results on two benchmark datasets show that our method is superior to state-of-the-art baselines. In addition, it is competitive in the few-shot scenario.
[ "Ma, Jing", "Chen, Chen", "Hou, Chunyan", "Yuan, Xiaojie" ]
KAPALM: Knowledge grAPh enhAnced Language Models for Fake News Detection
findings-emnlp.263
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.264.bib
https://aclanthology.org/2023.findings-emnlp.264/
@inproceedings{murthy-etal-2023-comparing, title = "Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models", author = "Murthy, Sonia and Parece, Kiera and Bridgers, Sophie and Qian, Peng and Ullman, Tomer", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.264", doi = "10.18653/v1/2023.findings-emnlp.264", pages = "4010--4025", abstract = "In law, lore, and everyday life, loopholes are commonplace. When people exploit a loophole, they understand the intended meaning or goal of another person, but choose to go with a different interpretation. Past and current AI research has shown that artificial intelligence engages in what seems superficially like the exploitation of loopholes, but this is likely anthropomorphization. It remains unclear to what extent current models, especially Large Language Models (LLMs), capture the pragmatic understanding required for engaging in loopholes. We examined the performance of LLMs on two metrics developed for studying loophole behavior in humans: evaluation (ratings of trouble, upset, and humor), and generation (coming up with new loopholes in a given context). We conducted a fine-grained comparison of state-of-the-art LLMs to humans, and find that while many of the models rate loophole behaviors as resulting in less trouble and upset than outright non-compliance (in line with adults), they struggle to recognize the humor in the creative exploitation of loopholes in the way that humans do. Furthermore, only two of the models, GPT 3 and 3.5, are capable of generating loopholes of their own, with GPT3.5 performing closest to the human baseline.", }
In law, lore, and everyday life, loopholes are commonplace. When people exploit a loophole, they understand the intended meaning or goal of another person, but choose to go with a different interpretation. Past and current AI research has shown that artificial intelligence engages in what seems superficially like the exploitation of loopholes, but this is likely anthropomorphization. It remains unclear to what extent current models, especially Large Language Models (LLMs), capture the pragmatic understanding required for engaging in loopholes. We examined the performance of LLMs on two metrics developed for studying loophole behavior in humans: evaluation (ratings of trouble, upset, and humor), and generation (coming up with new loopholes in a given context). We conducted a fine-grained comparison of state-of-the-art LLMs to humans, and find that while many of the models rate loophole behaviors as resulting in less trouble and upset than outright non-compliance (in line with adults), they struggle to recognize the humor in the creative exploitation of loopholes in the way that humans do. Furthermore, only two of the models, GPT 3 and 3.5, are capable of generating loopholes of their own, with GPT3.5 performing closest to the human baseline.
[ "Murthy, Sonia", "Parece, Kiera", "Bridgers, Sophie", "Qian, Peng", "Ullman, Tomer" ]
Comparing the Evaluation and Production of Loophole Behavior in Humans and Large Language Models
findings-emnlp.264
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.265.bib
https://aclanthology.org/2023.findings-emnlp.265/
@inproceedings{payan-etal-2023-instructexcel, title = "{I}nstruct{E}xcel: A Benchmark for Natural Language Instruction in Excel", author = "Payan, Justin and Mishra, Swaroop and Singh, Mukul and Negreanu, Carina and Poelitz, Christian and Baral, Chitta and Roy, Subhro and Chakravarthy, Rasika and Van Durme, Benjamin and Nouri, Elnaz", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.265", doi = "10.18653/v1/2023.findings-emnlp.265", pages = "4026--4043", abstract = "With the evolution of Large Language Models (LLMs) we can solve increasingly more complex NLP tasks across various domains, including spreadsheets. This work investigates whether LLMs can generate code (Excel OfficeScripts, a TypeScript API for executing many tasks in Excel) that solves Excel specific tasks provided via natural language user instructions. To do so we introduce a new large-scale benchmark, InstructExcel, created by leveraging the {`}Automate{'} feature in Excel to automatically generate OfficeScripts from users{'} actions. Our benchmark includes over 10k samples covering 170+ Excel operations across 2,000 publicly available Excel spreadsheets. Experiments across various zero-shot and few-shot settings show that InstructExcel is a hard benchmark for state of the art models like GPT-4. We observe that (1) using GPT-4 over GPT-3.5, (2) providing more in-context examples, and (3) dynamic prompting can help improve performance on this benchmark.", }
With the evolution of Large Language Models (LLMs) we can solve increasingly more complex NLP tasks across various domains, including spreadsheets. This work investigates whether LLMs can generate code (Excel OfficeScripts, a TypeScript API for executing many tasks in Excel) that solves Excel specific tasks provided via natural language user instructions. To do so we introduce a new large-scale benchmark, InstructExcel, created by leveraging the {`}Automate{'} feature in Excel to automatically generate OfficeScripts from users{'} actions. Our benchmark includes over 10k samples covering 170+ Excel operations across 2,000 publicly available Excel spreadsheets. Experiments across various zero-shot and few-shot settings show that InstructExcel is a hard benchmark for state of the art models like GPT-4. We observe that (1) using GPT-4 over GPT-3.5, (2) providing more in-context examples, and (3) dynamic prompting can help improve performance on this benchmark.
[ "Payan, Justin", "Mishra, Swaroop", "Singh, Mukul", "Negreanu, Carina", "Poelitz, Christian", "Baral, Chitta", "Roy, Subhro", "Chakravarthy, Rasika", "Van Durme, Benjamin", "Nouri, Elnaz" ]
InstructExcel: A Benchmark for Natural Language Instruction in Excel
findings-emnlp.265
2310.14495
[ "" ]
https://huggingface.co/papers/2310.14495
3
1
2
10
[]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.266.bib
https://aclanthology.org/2023.findings-emnlp.266/
@inproceedings{zhao-etal-2023-hallucination, title = "Hallucination Detection for Grounded Instruction Generation", author = "Zhao, Lingjun and Nguyen, Khanh and Daum{\'e} III, Hal", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.266", doi = "10.18653/v1/2023.findings-emnlp.266", pages = "4044--4053", abstract = "We investigate the problem of generating instructions to guide humans to navigate in simulated residential environments. A major issue with current models is hallucination: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path. We develop a model that detects these hallucinated references by adopting a model pre-trained on a large corpus of image-text pairs, and fine-tuning it with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. Our final model outperforms several baselines, including using word probability estimated by the instruction-generation model, and supervised models based on LSTM and Transformer.", }
We investigate the problem of generating instructions to guide humans to navigate in simulated residential environments. A major issue with current models is hallucination: they generate references to actions or objects that are inconsistent with what a human follower would perform or encounter along the described path. We develop a model that detects these hallucinated references by adopting a model pre-trained on a large corpus of image-text pairs, and fine-tuning it with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. Our final model outperforms several baselines, including using word probability estimated by the instruction-generation model, and supervised models based on LSTM and Transformer.
[ "Zhao, Lingjun", "Nguyen, Khanh", "Daum{\\'e} III, Hal" ]
Hallucination Detection for Grounded Instruction Generation
findings-emnlp.266
2310.15319
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
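The record above fine-tunes a pretrained image-text model with a contrastive loss that separates correct instructions from instructions containing synthesized hallucinations. The margin-ranking sketch below shows one standard way to write such an objective; the margin value, cosine similarity, and pairing scheme are assumptions rather than the paper's exact loss.

```python
import torch

def instruction_contrastive_loss(path_emb: torch.Tensor,
                                 correct_emb: torch.Tensor,
                                 hallucinated_emb: torch.Tensor,
                                 margin: float = 0.2) -> torch.Tensor:
    """Push the (path, correct-instruction) similarity above the
    (path, hallucinated-instruction) similarity by a margin. All inputs are
    (batch, dim) embeddings, e.g. from a pretrained image-text encoder.
    Illustrates the kind of objective described, not the paper's exact loss."""
    sim_pos = torch.cosine_similarity(path_emb, correct_emb, dim=-1)
    sim_neg = torch.cosine_similarity(path_emb, hallucinated_emb, dim=-1)
    return torch.relu(margin - sim_pos + sim_neg).mean()
```

At inference, the same similarity score can be thresholded to flag instructions whose references are inconsistent with the path.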
https://aclanthology.org/2023.findings-emnlp.267.bib
https://aclanthology.org/2023.findings-emnlp.267/
@inproceedings{peskine-etal-2023-definitions, title = "Definitions Matter: Guiding {GPT} for Multi-label Classification", author = "Peskine, Youri and Koren{\v{c}}i{\'c}, Damir and Grubisic, Ivan and Papotti, Paolo and Troncy, Raphael and Rosso, Paolo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.267", doi = "10.18653/v1/2023.findings-emnlp.267", pages = "4054--4063", abstract = "Large language models have recently risen in popularity due to their ability to perform many natural language tasks without requiring any fine-tuning. In this work, we focus on two novel ideas: (1) generating definitions from examples and using them for zero-shot classification, and (2) investigating how an LLM makes use of the definitions. We thoroughly analyze the performance of the GPT-3 model for fine-grained multi-label conspiracy theory classification of tweets using zero-shot labeling. In doing so, we assess how to improve the labeling by providing minimal but meaningful context in the form of the definitions of the labels. We compare descriptive noun phrases and human-crafted definitions, introduce a new method to help the model generate definitions from examples, and propose a method to evaluate GPT-3{'}s understanding of the definitions. We demonstrate that improving definitions of class labels has a direct consequence on the downstream classification results.", }
Large language models have recently risen in popularity due to their ability to perform many natural language tasks without requiring any fine-tuning. In this work, we focus on two novel ideas: (1) generating definitions from examples and using them for zero-shot classification, and (2) investigating how an LLM makes use of the definitions. We thoroughly analyze the performance of the GPT-3 model for fine-grained multi-label conspiracy theory classification of tweets using zero-shot labeling. In doing so, we assess how to improve the labeling by providing minimal but meaningful context in the form of the definitions of the labels. We compare descriptive noun phrases and human-crafted definitions, introduce a new method to help the model generate definitions from examples, and propose a method to evaluate GPT-3{'}s understanding of the definitions. We demonstrate that improving definitions of class labels has a direct consequence on the downstream classification results.
[ "Peskine, Youri", "Koren{\\v{c}}i{\\'c}, Damir", "Grubisic, Ivan", "Papotti, Paolo", "Troncy, Raphael", "Rosso, Paolo" ]
Definitions Matter: Guiding GPT for Multi-label Classification
findings-emnlp.267
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
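The record above hinges on giving GPT a definition for every label as minimal context. A sketch of such a zero-shot multi-label prompt builder follows; the template wording and the label in the usage comment are made up for illustration, not the paper's exact prompt.

```python
from typing import Dict

def zero_shot_prompt(tweet: str, label_definitions: Dict[str, str]) -> str:
    """Build a zero-shot multi-label classification prompt that includes a
    definition for each label, the kind of 'minimal but meaningful context'
    studied in the record above."""
    defs = "\n".join(f"- {label}: {definition}"
                     for label, definition in label_definitions.items())
    return (
        "Classify the tweet below. Possible labels (with definitions):\n"
        f"{defs}\n\n"
        f"Tweet: {tweet}\n"
        "Answer with all applicable labels, comma-separated:"
    )

# Example usage with a made-up definition:
# print(zero_shot_prompt("...", {"New World Order":
#       "Claims that a secret elite controls world governments."}))
```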
https://aclanthology.org/2023.findings-emnlp.268.bib
https://aclanthology.org/2023.findings-emnlp.268/
@inproceedings{xie-etal-2023-echo, title = "{ECH}o: A Visio-Linguistic Dataset for Event Causality Inference via Human-Centric Reasoning", author = "Xie, Yuxi and Li, Guanzhen and Kan, Min-Yen", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.268", doi = "10.18653/v1/2023.findings-emnlp.268", pages = "4064--4085", abstract = "We introduce ECHo (Event Causality Inference via Human-Centric Reasoning), a diagnostic dataset of event causality inference grounded in visio-linguistic social scenarios. ECHo employs real-world human-centric deductive information building on a television crime drama. ECHo requires the Theory-of-Mind (ToM) ability to understand and reason about social interactions based on multimodal information. Using ECHo, we propose a unified Chain-of-Thought (CoT) framework to assess the reasoning capability of current AI systems. Our ToM-enhanced CoT pipeline accommodates various large foundation models in both zero-shot and few-shot visio-linguistic reasoning. We use this framework to scrutinize recent large foundation models such as InstructGPT and MiniGPT-4 on three diagnostic human-centric tasks. Further analysis demonstrates ECHo as a challenging dataset to expose imperfections and inconsistencies in reasoning. Our data and code are publicly available at [https://github.com/YuxiXie/ECHo](https://github.com/YuxiXie/ECHo).", }
We introduce ECHo (Event Causality Inference via Human-Centric Reasoning), a diagnostic dataset of event causality inference grounded in visio-linguistic social scenarios. ECHo employs real-world human-centric deductive information building on a television crime drama. ECHo requires the Theory-of-Mind (ToM) ability to understand and reason about social interactions based on multimodal information. Using ECHo, we propose a unified Chain-of-Thought (CoT) framework to assess the reasoning capability of current AI systems. Our ToM-enhanced CoT pipeline accommodates various large foundation models in both zero-shot and few-shot visio-linguistic reasoning. We use this framework to scrutinize recent large foundation models such as InstructGPT and MiniGPT-4 on three diagnostic human-centric tasks. Further analysis demonstrates ECHo as a challenging dataset to expose imperfections and inconsistencies in reasoning. Our data and code are publicly available at [https://github.com/YuxiXie/ECHo](https://github.com/YuxiXie/ECHo).
[ "Xie, Yuxi", "Li, Guanzhen", "Kan, Min-Yen" ]
ECHo: A Visio-Linguistic Dataset for Event Causality Inference via Human-Centric Reasoning
findings-emnlp.268
2305.14740
[ "https://github.com/yuxixie/echo" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.269.bib
https://aclanthology.org/2023.findings-emnlp.269/
@inproceedings{si-etal-2023-empirical, title = "An Empirical Study of Instruction-tuning Large Language Models in {C}hinese", author = "Si, Qingyi and Wang, Tong and Lin, Zheng and Zhang, Xu and Cao, Yanan and Wang, Weiping", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.269", doi = "10.18653/v1/2023.findings-emnlp.269", pages = "4086--4107", abstract = "The success of ChatGPT validates the potential of large language models (LLMs) in artificial general intelligence (AGI). Subsequently, the release of LLMs has sparked the open-source community{'}s interest in instruction-tuning, which is deemed to accelerate ChatGPT{'}s replication process. However, research on instruction-tuning LLMs in Chinese, the world{'}s most spoken language, is still in its early stages. Therefore, this paper makes an in-depth empirical study of instruction-tuning LLMs in Chinese, which can serve as a cookbook that provides valuable findings for effectively customizing LLMs that can better respond to Chinese instructions. Specifically, we systematically explore the impact of LLM bases, parameter-efficient methods, and instruction data types, which are the three most important elements for instruction-tuning. Besides, we also conduct experiments to study the impact of other factors, e.g., chain-of-thought data and human-value alignment. We hope that this empirical study can make a modest contribution to the open Chinese version of ChatGPT. This paper will release a powerful Chinese LLM that is comparable to ChatGLM. The code and data are available at https://github.com/PhoebusSi/Alpaca-CoT.", }
The success of ChatGPT validates the potential of large language models (LLMs) in artificial general intelligence (AGI). Subsequently, the release of LLMs has sparked the open-source community{'}s interest in instruction-tuning, which is deemed to accelerate ChatGPT{'}s replication process. However, research on instruction-tuning LLMs in Chinese, the world{'}s most spoken language, is still in its early stages. Therefore, this paper makes an in-depth empirical study of instruction-tuning LLMs in Chinese, which can serve as a cookbook that provides valuable findings for effectively customizing LLMs that can better respond to Chinese instructions. Specifically, we systematically explore the impact of LLM bases, parameter-efficient methods, and instruction data types, which are the three most important elements for instruction-tuning. Besides, we also conduct experiments to study the impact of other factors, e.g., chain-of-thought data and human-value alignment. We hope that this empirical study can make a modest contribution to the open Chinese version of ChatGPT. This paper will release a powerful Chinese LLM that is comparable to ChatGLM. The code and data are available at https://github.com/PhoebusSi/Alpaca-CoT.
[ "Si, Qingyi", "Wang, Tong", "Lin, Zheng", "Zhang, Xu", "Cao, Yanan", "Wang, Weiping" ]
An Empirical Study of Instruction-tuning Large Language Models in Chinese
findings-emnlp.269
2310.07328
[ "https://github.com/phoebussi/alpaca-cot" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.270.bib
https://aclanthology.org/2023.findings-emnlp.270/
@inproceedings{patil-etal-2023-debiasing, title = "Debiasing Multimodal Models via Causal Information Minimization", author = "Patil, Vaidehi and Maharana, Adyasha and Bansal, Mohit", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.270", doi = "10.18653/v1/2023.findings-emnlp.270", pages = "4108--4123", abstract = "Most existing debiasing methods for multimodal models, including causal intervention and inference methods, utilize approximate heuristics to represent the biases, such as shallow features from early stages of training or unimodal features for multimodal tasks like VQA, etc., which may not be accurate. In this paper, we study bias arising from confounders in a causal graph for multimodal data, and examine a novel approach that leverages causally-motivated information minimization to learn the confounder representations. Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data. Hence, minimizing the information content of features obtained from a pretrained biased model helps learn the simplest predictive features that capture the underlying data distribution. We treat these features as confounder representations and use them via methods motivated by causal theory to remove bias from models. We find that the learned confounder representations indeed capture dataset biases and the proposed debiasing methods improve out-of-distribution (OOD) performance on multiple multimodal datasets without sacrificing in-distribution performance. Additionally, we introduce a novel metric to quantify the sufficiency of spurious features in models{'} predictions that further demonstrates the effectiveness of our proposed methods.", }
Most existing debiasing methods for multimodal models, including causal intervention and inference methods, utilize approximate heuristics to represent the biases, such as shallow features from early stages of training or unimodal features for multimodal tasks like VQA, etc., which may not be accurate. In this paper, we study bias arising from confounders in a causal graph for multimodal data, and examine a novel approach that leverages causally-motivated information minimization to learn the confounder representations. Robust predictive features contain diverse information that helps a model generalize to out-of-distribution data. Hence, minimizing the information content of features obtained from a pretrained biased model helps learn the simplest predictive features that capture the underlying data distribution. We treat these features as confounder representations and use them via methods motivated by causal theory to remove bias from models. We find that the learned confounder representations indeed capture dataset biases and the proposed debiasing methods improve out-of-distribution (OOD) performance on multiple multimodal datasets without sacrificing in-distribution performance. Additionally, we introduce a novel metric to quantify the sufficiency of spurious features in models{'} predictions that further demonstrates the effectiveness of our proposed methods.
[ "Patil, Vaidehi", "Maharana, Adyasha", "Bansal, Mohit" ]
Debiasing Multimodal Models via Causal Information Minimization
findings-emnlp.270
2311.16941
[ "https://github.com/vaidehi99/causalinfomin" ]
https://huggingface.co/papers/2311.16941
1
1
0
3
[]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.271.bib
https://aclanthology.org/2023.findings-emnlp.271/
@inproceedings{teodorescu-mohammad-2023-evaluating, title = "Evaluating Emotion Arcs Across Languages: Bridging the Global Divide in Sentiment Analysis", author = "Teodorescu, Daniela and Mohammad, Saif", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.271", doi = "10.18653/v1/2023.findings-emnlp.271", pages = "4124--4137", abstract = "Emotion arcs capture how an individual (or a population) feels over time. They are widely used in industry and research; however, there is little work on evaluating the automatically generated arcs. This is because of the difficulty of establishing the true (gold) emotion arc. Our work, for the first time, systematically and quantitatively evaluates automatically generated emotion arcs. We also compare two common ways of generating emotion arcs: Machine-Learning (ML) models and Lexicon-Only (LexO) methods. By running experiments on 18 diverse datasets in 9 languages, we show that despite being markedly poor at instance-level emotion classification, LexO methods are highly accurate at generating emotion arcs when aggregating information from hundreds of instances. We also show, through experiments on six indigenous African languages, as well as Arabic and Spanish, that automatic translations of English emotion lexicons can be used to generate high-quality emotion arcs in lower-resource languages. This opens up avenues for work on emotions in languages from around the world, which is crucial for commerce, public policy, and health research in service of speakers often left behind. Code and resources: https://github.com/dteodore/EmotionArcs", }
Emotion arcs capture how an individual (or a population) feels over time. They are widely used in industry and research; however, there is little work on evaluating the automatically generated arcs. This is because of the difficulty of establishing the true (gold) emotion arc. Our work, for the first time, systematically and quantitatively evaluates automatically generated emotion arcs. We also compare two common ways of generating emotion arcs: Machine-Learning (ML) models and Lexicon-Only (LexO) methods. By running experiments on 18 diverse datasets in 9 languages, we show that despite being markedly poor at instance-level emotion classification, LexO methods are highly accurate at generating emotion arcs when aggregating information from hundreds of instances. We also show, through experiments on six indigenous African languages, as well as Arabic and Spanish, that automatic translations of English emotion lexicons can be used to generate high-quality emotion arcs in lower-resource languages. This opens up avenues for work on emotions in languages from around the world, which is crucial for commerce, public policy, and health research in service of speakers often left behind. Code and resources: https://github.com/dteodore/EmotionArcs
[ "Teodorescu, Daniela", "Mohammad, Saif" ]
Evaluating Emotion Arcs Across Languages: Bridging the Global Divide in Sentiment Analysis
findings-emnlp.271
2306.02213
[ "https://github.com/dteodore/emotionarcs" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
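The key quantitative point in the record above is that noisy lexicon-only scores become accurate once aggregated over hundreds of instances. The sketch below scores each text as the mean lexicon value of its words and averages over non-overlapping bins of instances; the lexicon format and `window` size are illustrative choices, not the paper's setup.

```python
from typing import Dict, List

def emotion_arc(texts: List[str],
                lexicon: Dict[str, float],
                window: int = 100) -> List[float]:
    """Lexicon-only (LexO) emotion arc sketch: per-instance scores are mean
    lexicon values of the instance's words; arc points are averages over
    `window` consecutive instances. Aggregation is what makes the noisy
    per-instance scores usable."""
    def score(text: str) -> float:
        hits = [lexicon[w] for w in text.lower().split() if w in lexicon]
        return sum(hits) / len(hits) if hits else 0.0

    inst = [score(t) for t in texts]
    # One arc point per non-overlapping bin of `window` instances.
    return [sum(inst[i:i + window]) / min(window, len(inst) - i)
            for i in range(0, len(inst), window)]
```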
https://aclanthology.org/2023.findings-emnlp.272.bib
https://aclanthology.org/2023.findings-emnlp.272/
@inproceedings{li-etal-2023-multi-step, title = "Multi-step Jailbreaking Privacy Attacks on {C}hat{GPT}", author = "Li, Haoran and Guo, Dadi and Fan, Wei and Xu, Mingshi and Huang, Jie and Meng, Fanpu and Song, Yangqiu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.272", doi = "10.18653/v1/2023.findings-emnlp.272", pages = "4138--4153", abstract = "With the rapid progress of large language models (LLMs), many downstream NLP tasks can be well solved given appropriate prompts. Though model developers and researchers work hard on dialog safety to avoid generating harmful content from LLMs, it is still challenging to steer AI-generated content (AIGC) for the human good. As powerful LLMs are devouring existing text data from various domains (e.g., GPT-3 is trained on 45TB texts), it is natural to doubt whether the private information is included in the training data and what privacy threats can these LLMs and their downstream applications bring. In this paper, we study the privacy threats from OpenAI{'}s ChatGPT and the New Bing enhanced by ChatGPT and show that application-integrated LLMs may cause new privacy threats. To this end, we conduct extensive experiments to support our claims and discuss LLMs{'} privacy implications.", }
With the rapid progress of large language models (LLMs), many downstream NLP tasks can be well solved given appropriate prompts. Though model developers and researchers work hard on dialog safety to avoid generating harmful content from LLMs, it is still challenging to steer AI-generated content (AIGC) for the human good. As powerful LLMs are devouring existing text data from various domains (e.g., GPT-3 is trained on 45TB texts), it is natural to doubt whether the private information is included in the training data and what privacy threats can these LLMs and their downstream applications bring. In this paper, we study the privacy threats from OpenAI{'}s ChatGPT and the New Bing enhanced by ChatGPT and show that application-integrated LLMs may cause new privacy threats. To this end, we conduct extensive experiments to support our claims and discuss LLMs{'} privacy implications.
[ "Li, Haoran", "Guo, Dadi", "Fan, Wei", "Xu, Mingshi", "Huang, Jie", "Meng, Fanpu", "Song, Yangqiu" ]
Multi-step Jailbreaking Privacy Attacks on ChatGPT
findings-emnlp.272
2304.05197
[ "https://github.com/hkust-knowcomp/llm-multistep-jailbreak" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.273.bib
https://aclanthology.org/2023.findings-emnlp.273/
@inproceedings{gatto-etal-2023-chain, title = "Chain-of-Thought Embeddings for Stance Detection on Social Media", author = "Gatto, Joseph and Sharif, Omar and Preum, Sarah", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.273", doi = "10.18653/v1/2023.findings-emnlp.273", pages = "4154--4161", abstract = "Stance detection on social media is challenging for Large Language Models (LLMs), as emerging slang and colloquial language in online conversations often contain deeply implicit stance labels. Chain-of-Thought (COT) prompting has recently been shown to improve performance on stance detection tasks {---} alleviating some of these issues. However, COT prompting still struggles with implicit stance identification. This challenge arises because many samples are initially challenging to comprehend before a model becomes familiar with the slang and evolving knowledge related to different topics, all of which need to be acquired through the training data. In this study, we address this problem by introducing COT Embeddings which improve COT performance on stance detection tasks by embedding COT reasonings and integrating them into a traditional RoBERTa-based stance detection pipeline. Our analysis demonstrates that 1) text encoders can leverage COT reasonings with minor errors or hallucinations that would otherwise distort the COT output label. 2) Text encoders can overlook misleading COT reasoning when a sample{'}s prediction heavily depends on domain-specific patterns. Our model achieves SOTA performance on multiple stance detection datasets collected from social media.", }
Stance detection on social media is challenging for Large Language Models (LLMs), as emerging slang and colloquial language in online conversations often contain deeply implicit stance labels. Chain-of-Thought (COT) prompting has recently been shown to improve performance on stance detection tasks, alleviating some of these issues. However, COT prompting still struggles with implicit stance identification. This challenge arises because many samples are initially challenging to comprehend before a model becomes familiar with the slang and evolving knowledge related to different topics, all of which need to be acquired through the training data. In this study, we address this problem by introducing COT Embeddings, which improve COT performance on stance detection tasks by embedding COT reasonings and integrating them into a traditional RoBERTa-based stance detection pipeline. Our analysis demonstrates that 1) text encoders can leverage COT reasonings with minor errors or hallucinations that would otherwise distort the COT output label, and 2) text encoders can overlook misleading COT reasoning when a sample's prediction heavily depends on domain-specific patterns. Our model achieves SOTA performance on multiple stance detection datasets collected from social media.
[ "Gatto, Joseph", "Sharif, Omar", "Preum, Sarah" ]
Chain-of-Thought Embeddings for Stance Detection on Social Media
findings-emnlp.273
2310.19750
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.274.bib
https://aclanthology.org/2023.findings-emnlp.274/
@inproceedings{nakshatri-etal-2023-using, title = "Using {LLM} for Improving Key Event Discovery: Temporal-Guided News Stream Clustering with Event Summaries", author = "Nakshatri, Nishanth and Liu, Siyi and Chen, Sihao and Roth, Dan and Goldwasser, Dan and Hopkins, Daniel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.274", doi = "10.18653/v1/2023.findings-emnlp.274", pages = "4162--4173", abstract = "Understanding and characterizing the discussions around key events in news streams is important for analyzing political discourse. In this work, we study the problem of identification of such key events and the news articles associated with those events from news streams. We propose a generic framework for news stream clustering that analyzes the temporal trend of news articles to automatically extract the underlying key news events that draw significant media attention. We characterize such key events by generating event summaries, based on which we form document clusters in an unsupervised fashion. We evaluate our simple yet effective framework, and show that it produces more coherent event-focused clusters. To demonstrate the utility of our approach, and facilitate future research along the line, we use our framework to construct KeyEvents, a dataset of 40k articles with 611 key events from 11 topics.", }
Understanding and characterizing the discussions around key events in news streams is important for analyzing political discourse. In this work, we study the problem of identifying such key events and the news articles associated with those events from news streams. We propose a generic framework for news stream clustering that analyzes the temporal trend of news articles to automatically extract the underlying key news events that draw significant media attention. We characterize such key events by generating event summaries, based on which we form document clusters in an unsupervised fashion. We evaluate our simple yet effective framework, and show that it produces more coherent event-focused clusters. To demonstrate the utility of our approach, and to facilitate future research along this line, we use our framework to construct KeyEvents, a dataset of 40k articles with 611 key events from 11 topics.
[ "Nakshatri, Nishanth", "Liu, Siyi", "Chen, Sihao", "Roth, Dan", "Goldwasser, Dan", "Hopkins, Daniel" ]
Using LLM for Improving Key Event Discovery: Temporal-Guided News Stream Clustering with Event Summaries
findings-emnlp.274
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.275.bib
https://aclanthology.org/2023.findings-emnlp.275/
@inproceedings{liu-etal-2023-descriptive, title = "Descriptive Prompt Paraphrasing for Target-Oriented Multimodal Sentiment Classification", author = "Liu, Dan and Li, Lin and Tao, Xiaohui and Cui, Jian and Xie, Qing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.275", doi = "10.18653/v1/2023.findings-emnlp.275", pages = "4174--4186", abstract = "Target-Oriented Multimodal Sentiment Classification (TMSC) aims to perform sentiment polarity on a target jointly considering its corresponding multiple modalities including text, image, and others. Current researches mainly work on either of two types of targets in a decentralized manner. One type is entity, such as a person name, a location name, etc. and the other is aspect, such as {`}food{'}, {`}service{'}, etc. We believe that this target type based division in task modelling is not necessary because the sentiment polarity of the specific target is not governed by its type but its context. For this reason, we propose a unified model for target-oriented multimodal sentiment classification, so called UnifiedTMSC. It is prompt-based language modelling and performs well on four datasets spanning the above two target types. Specifically, we design descriptive prompt paraphrasing to reformulate TMSC task via (1) task paraphrasing, which obtains paraphrased prompts based on the task description through a paraphrasing rule, and (2) image prefix tuning, which optimizes a small continuous image vector throughout the multimodal representation space of text and images. Conducted on two entity-level multimodal datasets: Twitter-2015 and Twitter-2017, and two aspect-level multimodal datasets: Multi-ZOL and MASAD, the experimental results show the effectiveness of our UnifiedTMSC.", }
Target-Oriented Multimodal Sentiment Classification (TMSC) aims to determine the sentiment polarity of a target by jointly considering its corresponding multiple modalities, including text, image, and others. Current research mainly works on one of two types of targets in a decentralized manner. One type is entities, such as person names and location names, and the other is aspects, such as 'food' and 'service'. We believe that this target-type-based division in task modelling is not necessary, because the sentiment polarity of a specific target is governed not by its type but by its context. For this reason, we propose a unified model for target-oriented multimodal sentiment classification, called UnifiedTMSC. It is based on prompt-based language modelling and performs well on four datasets spanning the above two target types. Specifically, we design descriptive prompt paraphrasing to reformulate the TMSC task via (1) task paraphrasing, which obtains paraphrased prompts based on the task description through a paraphrasing rule, and (2) image prefix tuning, which optimizes a small continuous image vector within the multimodal representation space of text and images. Experiments conducted on two entity-level multimodal datasets (Twitter-2015 and Twitter-2017) and two aspect-level multimodal datasets (Multi-ZOL and MASAD) show the effectiveness of our UnifiedTMSC.
[ "Liu, Dan", "Li, Lin", "Tao, Xiaohui", "Cui, Jian", "Xie, Qing" ]
Descriptive Prompt Paraphrasing for Target-Oriented Multimodal Sentiment Classification
findings-emnlp.275
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.276.bib
https://aclanthology.org/2023.findings-emnlp.276/
@inproceedings{jin-etal-2023-joint, title = "Joint Semantic and Strategy Matching for Persuasive Dialogue", author = "Jin, Chuhao and Zhu, Yutao and Kong, Lingzhen and Li, Shijie and Zhang, Xiao and Song, Ruihua and Chen, Xu and Chen, Huan and Sun, Yuchong and Chen, Yu and Xu, Jun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.276", doi = "10.18653/v1/2023.findings-emnlp.276", pages = "4187--4197", abstract = "Persuasive dialogue aims to persuade users to achieve some targets by conversations. While previous persuasion models have achieved notable successes, they mostly base themselves on utterance semantic matching, and an important aspect has been ignored, that is, the strategy of the conversations, for example, the agent can choose an \textit{emotional-appeal} strategy to impress users. Compared with utterance semantics, conversation strategies are high-level concepts, which can be informative and provide complementary information to achieve effective persuasions. In this paper, we propose to build a persuasion model by jointly modeling the conversation semantics and strategies, where we design a BERT-like module and an auto-regressive predictor to match the semantics and strategies, respectively. Experimental results indicate that our proposed approach can significantly improve the state-of-the-art baseline by 5{\%} on a small dataset and 37{\%} on a large dataset in terms of Recall@1. Detailed analyses show that the auto-regressive predictor contributes most to the final performance.", }
Persuasive dialogue aims to persuade users to achieve certain targets through conversation. While previous persuasion models have achieved notable successes, they mostly rely on utterance-level semantic matching and ignore an important aspect: the strategy of the conversation. For example, the agent can choose an emotional-appeal strategy to impress users. Compared with utterance semantics, conversation strategies are high-level concepts, which can be informative and provide complementary information to achieve effective persuasion. In this paper, we propose to build a persuasion model by jointly modeling the conversation semantics and strategies, where we design a BERT-like module and an auto-regressive predictor to match the semantics and strategies, respectively. Experimental results indicate that our proposed approach can significantly improve the state-of-the-art baseline by 5% on a small dataset and 37% on a large dataset in terms of Recall@1. Detailed analyses show that the auto-regressive predictor contributes most to the final performance.
[ "Jin, Chuhao", "Zhu, Yutao", "Kong, Lingzhen", "Li, Shijie", "Zhang, Xiao", "Song, Ruihua", "Chen, Xu", "Chen, Huan", "Sun, Yuchong", "Chen, Yu", "Xu, Jun" ]
Joint Semantic and Strategy Matching for Persuasive Dialogue
findings-emnlp.276
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.277.bib
https://aclanthology.org/2023.findings-emnlp.277/
@inproceedings{bin-etal-2023-non-autoregressive, title = "Non-Autoregressive Sentence Ordering", author = "Bin, Yi and Shi, Wenhao and Ji, Bin and Zhang, Jipeng and Ding, Yujuan and Yang, Yang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.277", doi = "10.18653/v1/2023.findings-emnlp.277", pages = "4198--4214", abstract = "Existing sentence ordering approaches generally employ encoder-decoder frameworks with the pointer net to recover the coherence by recurrently predicting each sentence step-by-step. Such an autoregressive manner only leverages unilateral dependencies during decoding and cannot fully explore the semantic dependency between sentences for ordering. To overcome these limitations, in this paper, we propose a novel Non-Autoregressive Ordering Network, dubbed \textit{NAON}, which explores bilateral dependencies between sentences and predicts the sentence for each position in parallel. We claim that the non-autoregressive manner is not just applicable but also particularly suitable to the sentence ordering task because of two peculiar characteristics of the task: 1) each generation target is in deterministic length, and 2) the sentences and positions should match exclusively. Furthermore, to address the repetition issue of the naive non-autoregressive Transformer, we introduce an exclusive loss to constrain the exclusiveness between positions and sentences. To verify the effectiveness of the proposed model, we conduct extensive experiments on several common-used datasets and the experimental results show that our method outperforms all the autoregressive approaches and yields competitive performance compared with the state-of-the-arts. The codes are available at: \url{https://github.com/steven640pixel/nonautoregressive-sentence-ordering}.", }
Existing sentence ordering approaches generally employ encoder-decoder frameworks with a pointer network to recover coherence by recurrently predicting each sentence step by step. Such an autoregressive manner only leverages unilateral dependencies during decoding and cannot fully explore the semantic dependencies between sentences for ordering. To overcome these limitations, in this paper we propose a novel Non-Autoregressive Ordering Network, dubbed NAON, which explores bilateral dependencies between sentences and predicts the sentence for each position in parallel. We claim that the non-autoregressive manner is not just applicable but also particularly suitable for the sentence ordering task because of two peculiar characteristics of the task: 1) each generation target has a deterministic length, and 2) the sentences and positions should match exclusively. Furthermore, to address the repetition issue of the naive non-autoregressive Transformer, we introduce an exclusive loss to constrain the exclusiveness between positions and sentences. To verify the effectiveness of the proposed model, we conduct extensive experiments on several commonly used datasets, and the experimental results show that our method outperforms all the autoregressive approaches and yields competitive performance compared with the state of the art. The code is available at: https://github.com/steven640pixel/nonautoregressive-sentence-ordering.
[ "Bin, Yi", "Shi, Wenhao", "Ji, Bin", "Zhang, Jipeng", "Ding, Yujuan", "Yang, Yang" ]
Non-Autoregressive Sentence Ordering
findings-emnlp.277
2310.12640
[ "https://github.com/steven640pixel/nonautoregressive-sentence-ordering" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.278.bib
https://aclanthology.org/2023.findings-emnlp.278/
@inproceedings{shen-etal-2023-large, title = "Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization", author = "Shen, Chenhui and Cheng, Liying and Nguyen, Xuan-Phi and You, Yang and Bing, Lidong", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.278", doi = "10.18653/v1/2023.findings-emnlp.278", pages = "4215--4233", abstract = "With the recent undeniable advancement in reasoning abilities in large language models (LLMs) like ChatGPT and GPT-4, there is a growing trend for using LLMs on various tasks. One area where LLMs can be employed is as an alternative evaluation metric for complex generative tasks, which generally demands expensive human judges to complement the traditional automatic metrics for various evaluation dimensions such as fluency and consistency. In this work, we conduct extensive analysis to investigate the stability and reliability of LLMs as automatic evaluators for abstractive summarization. We found that while ChatGPT and GPT-4 outperform the commonly used automatic metrics, they are not ready as human replacements due to significant limitations. That is, LLM evaluators rate each candidate system inconsistently and are dimension-dependent. They also struggle to compare candidates with close performance and become more unreliable with higher-quality summaries by obtaining a lower correlation with humans. In other words, with better abstractive summarization systems being introduced at a fast pace, LLMs may result in misleading and unreliable evaluations.", }
With the recent undeniable advancement in the reasoning abilities of large language models (LLMs) like ChatGPT and GPT-4, there is a growing trend of using LLMs for various tasks. One area where LLMs can be employed is as an alternative evaluation metric for complex generative tasks, whose evaluation generally demands expensive human judges to complement the traditional automatic metrics for various evaluation dimensions such as fluency and consistency. In this work, we conduct extensive analysis to investigate the stability and reliability of LLMs as automatic evaluators for abstractive summarization. We found that while ChatGPT and GPT-4 outperform the commonly used automatic metrics, they are not ready as human replacements due to significant limitations. That is, LLM evaluators rate each candidate system inconsistently and are dimension-dependent. They also struggle to compare candidates with close performance and become more unreliable with higher-quality summaries, obtaining a lower correlation with humans. In other words, with better abstractive summarization systems being introduced at a fast pace, LLMs may result in misleading and unreliable evaluations.
[ "Shen, Chenhui", "Cheng, Liying", "Nguyen, Xuan-Phi", "You, Yang", "Bing, Lidong" ]
Large Language Models are Not Yet Human-Level Evaluators for Abstractive Summarization
findings-emnlp.278
2305.13091
[ "https://github.com/damo-nlp-sg/llm_summeval" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.279.bib
https://aclanthology.org/2023.findings-emnlp.279/
@inproceedings{sabir-padro-2023-women, title = "Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender", author = "Sabir, Ahmed and Padr{\'o}, Llu{\'\i}s", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.279", pages = "4234--4240", abstract = "In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects have a strong gender bias (e.g., women-lipstick). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system. Our experiments demonstrate the utility of the gender score, since we observe that our score can measure the bias relation between a caption and its related gender; therefore, our score can be used as an additional metric to the existing Object Gender Co-Occ approach.", }
In this paper, we investigate the impact of objects on gender bias in image captioning systems. Our results show that only gender-specific objects have a strong gender bias (e.g., women-lipstick). In addition, we propose a visual semantic-based gender score that measures the degree of bias and can be used as a plug-in for any image captioning system. Our experiments demonstrate the utility of the gender score: we observe that our score can measure the bias relation between a caption and its related gender; therefore, our score can be used as a metric complementary to the existing Object Gender Co-Occ approach.
[ "Sabir, Ahmed", "Padr{\\'o}, Llu{\\'\\i}s" ]
Women Wearing Lipstick: Measuring the Bias Between an Object and Its Related Gender
findings-emnlp.279
2310.19130
[ "https://github.com/ahmedssabir/genderscore" ]
https://huggingface.co/papers/2310.19130
1
0
0
2
[]
[]
[ "AhmedSSabir/Demo-for-Gender-Score", "AhmedSSabir/Demo-for-Gender-Score-jp", "AhmedSSabir/Demo-for-Gender-Score-AR" ]
1
Poster
https://aclanthology.org/2023.findings-emnlp.280.bib
https://aclanthology.org/2023.findings-emnlp.280/
@inproceedings{rennard-etal-2023-fredsum, title = "{FREDS}um: A Dialogue Summarization Corpus for {F}rench Political Debates", author = "Rennard, Virgile and Shang, Guokan and Grari, Damien and Hunter, Julie and Vazirgiannis, Michalis", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.280", doi = "10.18653/v1/2023.findings-emnlp.280", pages = "4241--4253", abstract = "Recent advances in deep learning, and especially the invention of encoder-decoder architectures, have significantly improved the performance of abstractive summarization systems. While the majority of research has focused on written documents, we have observed an increasing interest in the summarization of dialogues and multi-party conversations over the past few years. In this paper, we present a dataset of French political debates for the purpose of enhancing resources for multi-lingual dialogue summarization. Our dataset consists of manually transcribed and annotated political debates, covering a range of topics and perspectives. We highlight the importance of high-quality transcription and annotations for training accurate and effective dialogue summarization models, and emphasize the need for multilingual resources to support dialogue summarization in non-English languages. We also provide baseline experiments using state-of-the-art methods, and encourage further research in this area to advance the field of dialogue summarization. Our dataset will be made publicly available for use by the research community, enabling further advances in multilingual dialogue summarization.", }
Recent advances in deep learning, and especially the invention of encoder-decoder architectures, have significantly improved the performance of abstractive summarization systems. While the majority of research has focused on written documents, we have observed an increasing interest in the summarization of dialogues and multi-party conversations over the past few years. In this paper, we present a dataset of French political debates for the purpose of enhancing resources for multilingual dialogue summarization. Our dataset consists of manually transcribed and annotated political debates, covering a range of topics and perspectives. We highlight the importance of high-quality transcription and annotations for training accurate and effective dialogue summarization models, and emphasize the need for multilingual resources to support dialogue summarization in non-English languages. We also provide baseline experiments using state-of-the-art methods, and encourage further research in this area to advance the field of dialogue summarization. Our dataset will be made publicly available for use by the research community, enabling further advances in multilingual dialogue summarization.
[ "Rennard, Virgile", "Shang, Guokan", "Grari, Damien", "Hunter, Julie", "Vazirgiannis, Michalis" ]
FREDSum: A Dialogue Summarization Corpus for French Political Debates
findings-emnlp.280
2312.04843
[ "https://github.com/linto-ai/fredsum" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.281.bib
https://aclanthology.org/2023.findings-emnlp.281/
@inproceedings{wang-shang-2023-towards, title = "Towards Zero-shot Relation Extraction in Web Mining: A Multimodal Approach with Relative {XML} Path", author = "Wang, Zilong and Shang, Jingbo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.281", doi = "10.18653/v1/2023.findings-emnlp.281", pages = "4254--4265", abstract = "The rapid growth of web pages and the increasing complexity of their structure poses a challenge for web mining models. Web mining models are required to understand semi-structured web pages, particularly when little is known about the subject or template of a new page. Current methods migrate language models to web mining by embedding the XML source code into the transformer or encoding the rendered layout with graph neural networks. However, these approaches do not take into account the relationships between text nodes within and across pages. In this paper, we propose a new approach, ReXMiner, for zero-shot relation extraction in web mining. ReXMiner encodes the shortest relative paths in the Document Object Model (DOM) tree of the web page which is a more accurate and efficient signal for key-value pair extraction within a web page. It also incorporates the popularity of each text node by counting the occurrence of the same text node across different web pages. We use contrastive learning to address the issue of sparsity in relation extraction. Extensive experiments on public benchmarks show that our method, ReXMiner, outperforms the state-of-the-art baselines in the task of zero-shot relation extraction in web mining.", }
The rapid growth of web pages and the increasing complexity of their structure pose a challenge for web mining models. Web mining models are required to understand semi-structured web pages, particularly when little is known about the subject or template of a new page. Current methods migrate language models to web mining by embedding the XML source code into the transformer or encoding the rendered layout with graph neural networks. However, these approaches do not take into account the relationships between text nodes within and across pages. In this paper, we propose a new approach, ReXMiner, for zero-shot relation extraction in web mining. ReXMiner encodes the shortest relative paths in the Document Object Model (DOM) tree of the web page, which is a more accurate and efficient signal for key-value pair extraction within a web page. It also incorporates the popularity of each text node by counting the occurrences of the same text node across different web pages. We use contrastive learning to address the issue of sparsity in relation extraction. Extensive experiments on public benchmarks show that our method, ReXMiner, outperforms the state-of-the-art baselines in the task of zero-shot relation extraction in web mining.
[ "Wang, Zilong", "Shang, Jingbo" ]
Towards Zero-shot Relation Extraction in Web Mining: A Multimodal Approach with Relative XML Path
findings-emnlp.281
2305.13805
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.282.bib
https://aclanthology.org/2023.findings-emnlp.282/
@inproceedings{ganti-etal-2023-narrative, title = "Narrative Style and the Spread of Health Misinformation on {T}witter", author = "Ganti, Achyutarama and Hussein, Eslam Ali Hassan and Wilson, Steven and Ma, Zexin and Zhao, Xinyan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.282", doi = "10.18653/v1/2023.findings-emnlp.282", pages = "4266--4282", abstract = "Using a narrative style is an effective way to communicate health information both on and off social media. Given the amount of misinformation being spread online and its potential negative effects, it is crucial to investigate the interplay between narrative communication style and misinformative health content on user engagement on social media platforms. To explore this in the context of Twitter, we start with previously annotated health misinformation tweets (n $\approx$15,000) and annotate a subset of the data (n=3,000) for the presence of narrative style. We then use these manually assigned labels to train text classifiers, experimenting with supervised fine-tuning and in-context learning for automatic narrative detection. We use our best model to label remaining portion of the dataset, then statistically analyze the relationship between narrative style, misinformation, and user-level features on engagement, finding that narrative use is connected to increased tweet engagement and can, in some cases, lead to increased engagement with misinformation. Finally, we analyze the general categories of language used in narratives and health misinformation in our dataset.", }
Using a narrative style is an effective way to communicate health information both on and off social media. Given the amount of misinformation being spread online and its potential negative effects, it is crucial to investigate the interplay between narrative communication style and misinformative health content in shaping user engagement on social media platforms. To explore this in the context of Twitter, we start with previously annotated health misinformation tweets (n ≈ 15,000) and annotate a subset of the data (n = 3,000) for the presence of narrative style. We then use these manually assigned labels to train text classifiers, experimenting with supervised fine-tuning and in-context learning for automatic narrative detection. We use our best model to label the remaining portion of the dataset, then statistically analyze the relationship between narrative style, misinformation, user-level features, and engagement, finding that narrative use is connected to increased tweet engagement and can, in some cases, lead to increased engagement with misinformation. Finally, we analyze the general categories of language used in narratives and health misinformation in our dataset.
[ "Ganti, Achyutarama", "Hussein, Eslam Ali Hassan", "Wilson, Steven", "Ma, Zexin", "Zhao, Xinyan" ]
Narrative Style and the Spread of Health Misinformation on Twitter
findings-emnlp.282
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.283.bib
https://aclanthology.org/2023.findings-emnlp.283/
@inproceedings{wang-etal-2023-hadskip, title = "{H}ad{S}kip: Homotopic and Adaptive Layer Skipping of Pre-trained Language Models for Efficient Inference", author = "Wang, Haoyu and Wang, Yaqing and Liu, Tianci and Zhao, Tuo and Gao, Jing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.283", doi = "10.18653/v1/2023.findings-emnlp.283", pages = "4283--4294", abstract = "Pre-trained language models (LMs) have brought remarkable performance on numerous NLP tasks. However, they require significant resources and entail high computational costs for inference, making them challenging to deploy in real-world and real-time systems. Existing early exiting methods aim to reduce computational complexity by selecting the layer at which to exit, but suffer from the limitation that they have to sequentially traverse through all layers prior to the selected exit layer, which lacks flexibility and degrades their performance. To solve this problem, we propose a \textbf{h}omotopic and \textbf{ad}aptive layer \textbf{skip}ping fine-tuning method named HadSkip. HadSkip adaptively selects the layers to skip based on a predefined budget. Specifically, we introduce a learnable gate before each layer of the LM to determine whether the current layer should be skipped. To tackle various challenges in training such as discrete gates and the budget constraint, we propose a fine-grained initialization strategy and homotopic optimization strategy. We conduct extensive experiments on the GLUE benchmark, and experimental results demonstrate the proposed HadSkip outperforms all state-of-the-art baselines significantly.", }
Pre-trained language models (LMs) have brought remarkable performance on numerous NLP tasks. However, they require significant resources and entail high computational costs for inference, making them challenging to deploy in real-world and real-time systems. Existing early exiting methods aim to reduce computational complexity by selecting the layer at which to exit, but suffer from the limitation that they have to sequentially traverse all layers prior to the selected exit layer, which lacks flexibility and degrades their performance. To solve this problem, we propose a homotopic and adaptive layer skipping fine-tuning method named HadSkip. HadSkip adaptively selects the layers to skip based on a predefined budget. Specifically, we introduce a learnable gate before each layer of the LM to determine whether the current layer should be skipped. To tackle various challenges in training, such as discrete gates and the budget constraint, we propose a fine-grained initialization strategy and a homotopic optimization strategy. We conduct extensive experiments on the GLUE benchmark, and the experimental results demonstrate that the proposed HadSkip outperforms all state-of-the-art baselines significantly.
[ "Wang, Haoyu", "Wang, Yaqing", "Liu, Tianci", "Zhao, Tuo", "Gao, Jing" ]
HadSkip: Homotopic and Adaptive Layer Skipping of Pre-trained Language Models for Efficient Inference
findings-emnlp.283
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.284.bib
https://aclanthology.org/2023.findings-emnlp.284/
@inproceedings{chen-etal-2023-empowering, title = "Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting", author = "Chen, Zhiyu and Lu, Yujie and Wang, William", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.284", doi = "10.18653/v1/2023.findings-emnlp.284", pages = "4295--4304", abstract = "Mental illness remains one of the most critical public health issues of our time, due to the severe scarcity and accessibility limit of professionals. Psychotherapy requires high-level expertise to conduct deep, complex reasoning and analysis on the cognition modeling of the patients. In the era of Large Language Models, we believe it is the right time to develop AI assistance for computational psychotherapy. We study the task of cognitive distortion detection and propose the Diagnosis of Thought (DoT) prompting. DoT performs diagnosis on the patient{'}s speech via three stages: subjectivity assessment to separate the facts and the thoughts; contrastive reasoning to elicit the reasoning processes supporting and contradicting the thoughts; and schema analysis to summarize the cognition schemas. The generated diagnosis rationales through the three stages are essential for assisting the professionals. Experiments demonstrate that DoT obtains significant improvements over ChatGPT for cognitive distortion detection, while generating high-quality rationales approved by human experts.", }
Mental illness remains one of the most critical public health issues of our time, due to the severe scarcity and limited accessibility of professionals. Psychotherapy requires high-level expertise to conduct deep, complex reasoning and analysis of the cognitive modeling of patients. In the era of Large Language Models, we believe it is the right time to develop AI assistance for computational psychotherapy. We study the task of cognitive distortion detection and propose Diagnosis of Thought (DoT) prompting. DoT performs diagnosis on the patient's speech via three stages: subjectivity assessment to separate the facts from the thoughts; contrastive reasoning to elicit the reasoning processes supporting and contradicting the thoughts; and schema analysis to summarize the cognition schemas. The diagnosis rationales generated through the three stages are essential for assisting professionals. Experiments demonstrate that DoT obtains significant improvements over ChatGPT for cognitive distortion detection, while generating high-quality rationales approved by human experts.
[ "Chen, Zhiyu", "Lu, Yujie", "Wang, William" ]
Empowering Psychotherapy with Large Language Models: Cognitive Distortion Detection through Diagnosis of Thought Prompting
findings-emnlp.284
2310.07146
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.285.bib
https://aclanthology.org/2023.findings-emnlp.285/
@inproceedings{kazemnejad-etal-2023-measuring, title = "Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models", author = "Kazemnejad, Amirhossein and Rezagholizadeh, Mehdi and Parthasarathi, Prasanna and Chandar, Sarath", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.285", doi = "10.18653/v1/2023.findings-emnlp.285", pages = "4305--4319", abstract = "While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts of knowledge, it remains unclear how much of this parametric knowledge is actually usable in performing downstream tasks. We propose a systematic framework to measure parametric knowledge utilization in PLMs. Our framework first extracts knowledge from a PLM{'}s parameters and subsequently constructs a downstream task around this extracted knowledge. Performance on this task thus depends exclusively on utilizing the model{'}s possessed knowledge, avoiding confounding factors like insufficient signal. As an instantiation, we study factual knowledge of PLMs and measure utilization across 125M to 13B parameter PLMs. We observe that: (1) PLMs exhibit two gaps - in acquired vs. utilized knowledge, (2) they show limited robustness in utilizing knowledge under distribution shifts, and (3) larger models close the acquired knowledge gap but the utilized knowledge gap remains. Overall, our study provides insights into PLMs{'} capabilities beyond their acquired knowledge.", }
While pre-trained language models (PLMs) have shown evidence of acquiring vast amounts of knowledge, it remains unclear how much of this parametric knowledge is actually usable in performing downstream tasks. We propose a systematic framework to measure parametric knowledge utilization in PLMs. Our framework first extracts knowledge from a PLM's parameters and subsequently constructs a downstream task around this extracted knowledge. Performance on this task thus depends exclusively on utilizing the model's possessed knowledge, avoiding confounding factors like insufficient signal. As an instantiation, we study the factual knowledge of PLMs and measure utilization across PLMs with 125M to 13B parameters. We observe that: (1) PLMs exhibit two gaps, in acquired vs. utilized knowledge; (2) they show limited robustness in utilizing knowledge under distribution shifts; and (3) larger models close the acquired knowledge gap, but the utilized knowledge gap remains. Overall, our study provides insights into PLMs' capabilities beyond their acquired knowledge.
[ "Kazemnejad, Amirhossein", "Rezagholizadeh, Mehdi", "Parthasarathi, Prasanna", "Ch", "ar, Sarath" ]
Measuring the Knowledge Acquisition-Utilization Gap in Pretrained Language Models
findings-emnlp.285
2305.14775
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.286.bib
https://aclanthology.org/2023.findings-emnlp.286/
@inproceedings{zhou-etal-2023-non-compositional, title = "Non-compositional Expression Generation Based on Curriculum Learning and Continual Learning", author = "Zhou, Jianing and Zeng, Ziheng and Gong, Hongyu and Bhat, Suma", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.286", doi = "10.18653/v1/2023.findings-emnlp.286", pages = "4320--4335", abstract = "Non-compositional expressions, by virtue of their non-compositionality, are a classic {`}pain in the neck{'} for NLP systems. Different from the general language modeling and generation tasks that are primarily compositional, generating non-compositional expressions is more challenging for current neural models, including large pre-trained language models. The main reasons are 1) their non-compositionality, and 2) the limited data resources. Therefore, to make the best use of available data for modeling non-compositionality, we propose a dynamic curriculum learning framework, which learns training examples from easy ones to harder ones thus optimizing the learning step by step but suffers from the forgetting problem. To alleviate the forgetting problem brought by the arrangement of training examples, we also apply a continual learning method into our curriculum learning framework. Our proposed method combined curriculum and continual learning, to gradually improve the model{'}s performance on the task of non-compositional expression generation. Experiments on idiomatic expression generation and metaphor generation affirm the effectiveness of our proposed curriculum learning framework and the application of continual learning. Our codes are available at https://github.com/zhjjn/CL2Gen.git.", }
Non-compositional expressions, by virtue of their non-compositionality, are a classic 'pain in the neck' for NLP systems. Unlike general language modeling and generation tasks, which are primarily compositional, generating non-compositional expressions is more challenging for current neural models, including large pre-trained language models. The main reasons are 1) their non-compositionality, and 2) the limited data resources. Therefore, to make the best use of available data for modeling non-compositionality, we propose a dynamic curriculum learning framework, which presents training examples from easy ones to harder ones, thus optimizing learning step by step, but suffers from the forgetting problem. To alleviate the forgetting problem brought about by the arrangement of training examples, we also apply a continual learning method within our curriculum learning framework. Our proposed method combines curriculum and continual learning to gradually improve the model's performance on the task of non-compositional expression generation. Experiments on idiomatic expression generation and metaphor generation affirm the effectiveness of our proposed curriculum learning framework and the application of continual learning. Our code is available at https://github.com/zhjjn/CL2Gen.git.
[ "Zhou, Jianing", "Zeng, Ziheng", "Gong, Hongyu", "Bhat, Suma" ]
Non-compositional Expression Generation Based on Curriculum Learning and Continual Learning
findings-emnlp.286
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.287.bib
https://aclanthology.org/2023.findings-emnlp.287/
@inproceedings{kwak-etal-2023-information, title = "Information Extraction from Legal Wills: How Well Does {GPT}-4 Do?", author = "Kwak, Alice and Jeong, Cheonkam and Forte, Gaetano and Bambauer, Derek and Morrison, Clayton and Surdeanu, Mihai", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.287", doi = "10.18653/v1/2023.findings-emnlp.287", pages = "4336--4353", abstract = "This work presents a manually annotated dataset for Information Extraction (IE) from legal wills, and relevant in-context learning experiments on the dataset. The dataset consists of entities, binary relations between the entities (e.g., relations between testator and beneficiary), and n-ary events (e.g., bequest) extracted from 45 legal wills from two US states. This dataset can serve as a foundation for downstream tasks in the legal domain. Another use case of this dataset is evaluating the performance of large language models (LLMs) on this IE task. We evaluated GPT-4 with our dataset to investigate its ability to extract information from legal wills. Our evaluation result demonstrates that the model is capable of handling the task reasonably well. When given instructions and examples as a prompt, GPT-4 shows decent performance for both entity extraction and relation extraction tasks. Nevertheless, the evaluation result also reveals that the model is not perfect. We observed inconsistent outputs (given a prompt) as well as prompt over-generalization.", }
This work presents a manually annotated dataset for Information Extraction (IE) from legal wills, and relevant in-context learning experiments on the dataset. The dataset consists of entities, binary relations between the entities (e.g., relations between testator and beneficiary), and n-ary events (e.g., bequest) extracted from 45 legal wills from two US states. This dataset can serve as a foundation for downstream tasks in the legal domain. Another use case of this dataset is evaluating the performance of large language models (LLMs) on this IE task. We evaluated GPT-4 with our dataset to investigate its ability to extract information from legal wills. Our evaluation results demonstrate that the model is capable of handling the task reasonably well. When given instructions and examples as a prompt, GPT-4 shows decent performance on both entity extraction and relation extraction tasks. Nevertheless, the evaluation results also reveal that the model is not perfect. We observed inconsistent outputs (given a prompt) as well as prompt over-generalization.
[ "Kwak, Alice", "Jeong, Cheonkam", "Forte, Gaetano", "Bambauer, Derek", "Morrison, Clayton", "Surdeanu, Mihai" ]
Information Extraction from Legal Wills: How Well Does GPT-4 Do?
findings-emnlp.287
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.288.bib
https://aclanthology.org/2023.findings-emnlp.288/
@inproceedings{jumelet-zuidema-2023-transparency, title = "Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution", author = "Jumelet, Jaap and Zuidema, Willem", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.288", doi = "10.18653/v1/2023.findings-emnlp.288", pages = "4354--4369", abstract = "We present a setup for training, evaluating and interpreting neural language models, that uses artificial, language-like data. The data is generated using a massive probabilistic grammar (based on state-split PCFGs), that is itself derived from a large natural language corpus, but also provides us complete control over the generative process. We describe and release both grammar and corpus, and test for the naturalness of our generated data. This approach allows us define closed-form expressions to efficiently compute exact lower bounds on obtainable perplexity using both causal and masked language modelling. Our results show striking differences between neural language modelling architectures and training objectives in how closely they allow approximating the lower bound on perplexity. Our approach also allows us to directly compare learned representations to symbolic rules in the underlying source. We experiment with various techniques for interpreting model behaviour and learning dynamics. With access to the underlying true source, our results show striking differences and outcomes in learning dynamics between different classes of words.", }
We present a setup for training, evaluating and interpreting neural language models that uses artificial, language-like data. The data is generated using a massive probabilistic grammar (based on state-split PCFGs) that is itself derived from a large natural language corpus, but also provides us with complete control over the generative process. We describe and release both the grammar and the corpus, and test for the naturalness of our generated data. This approach allows us to define closed-form expressions to efficiently compute exact lower bounds on obtainable perplexity using both causal and masked language modelling. Our results show striking differences between neural language modelling architectures and training objectives in how closely they allow approximating the lower bound on perplexity. Our approach also allows us to directly compare learned representations to symbolic rules in the underlying source. We experiment with various techniques for interpreting model behaviour and learning dynamics. With access to the underlying true source, our results show striking differences in learning dynamics between different classes of words.
[ "Jumelet, Jaap", "Zuidema, Willem" ]
Transparency at the Source: Evaluating and Interpreting Language Models With Access to the True Distribution
findings-emnlp.288
2310.14840
[ "https://github.com/clclab/pcfg-lm" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.289.bib
https://aclanthology.org/2023.findings-emnlp.289/
@inproceedings{song-etal-2023-continual, title = "Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition", author = "Song, Xiaoshuai and Mou, Yutao and He, Keqing and Qiu, Yueyan and Zhao, Jinxu and Wang, Pei and Xu, Weiran", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.289", doi = "10.18653/v1/2023.findings-emnlp.289", pages = "4370--4382", abstract = "In a practical dialogue system, users may input out-of-domain (OOD) queries. The Generalized Intent Discovery (GID) task aims to discover OOD intents from OOD queries and extend them to the in-domain (IND) classifier. However, GID only considers one stage of OOD learning, and needs to utilize the data in all previous stages for joint training, which limits its wide application in reality. In this paper, we introduce a new task, Continual Generalized Intent Discovery (CGID), which aims to continuously and automatically discover OOD intents from dynamic OOD data streams and then incrementally add them to the classifier with almost no previous data, thus moving towards dynamic intent recognition in an open world. Next, we propose a method called Prototype-guided Learning with Replay and Distillation (PLRD) for CGID, which bootstraps new intent discovery through class prototypes and balances new and old intents through data replay and feature distillation. Finally, we conduct detailed experiments and analysis to verify the effectiveness of PLRD and understand the key challenges of CGID for future research.", }
In a practical dialogue system, users may input out-of-domain (OOD) queries. The Generalized Intent Discovery (GID) task aims to discover OOD intents from OOD queries and extend them to the in-domain (IND) classifier. However, GID only considers one stage of OOD learning, and needs to utilize the data in all previous stages for joint training, which limits its wide application in reality. In this paper, we introduce a new task, Continual Generalized Intent Discovery (CGID), which aims to continuously and automatically discover OOD intents from dynamic OOD data streams and then incrementally add them to the classifier with almost no previous data, thus moving towards dynamic intent recognition in an open world. Next, we propose a method called Prototype-guided Learning with Replay and Distillation (PLRD) for CGID, which bootstraps new intent discovery through class prototypes and balances new and old intents through data replay and feature distillation. Finally, we conduct detailed experiments and analysis to verify the effectiveness of PLRD and understand the key challenges of CGID for future research.
[ "Song, Xiaoshuai", "Mou, Yutao", "He, Keqing", "Qiu, Yueyan", "Zhao, Jinxu", "Wang, Pei", "Xu, Weiran" ]
Continual Generalized Intent Discovery: Marching Towards Dynamic and Open-world Intent Recognition
findings-emnlp.289
2310.10184
[ "https://github.com/songxiaoshuai/CGID" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.290.bib
https://aclanthology.org/2023.findings-emnlp.290/
@inproceedings{santra-etal-2023-frugal, title = "Frugal Prompting for Dialog Models", author = "Santra, Bishal and Basak, Sakya and De, Abhinandan and Gupta, Manish and Goyal, Pawan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.290", doi = "10.18653/v1/2023.findings-emnlp.290", pages = "4383--4407", abstract = "The use of large language models (LLMs) in natural language processing (NLP) tasks is rapidly increasing, leading to changes in how researchers approach problems in the field. To fully utilize these models{'} abilities, a better understanding of their behavior for different input protocols is required. With LLMs, users can directly interact with the models through a text-based interface to define and solve various tasks. Hence, understanding the conversational abilities of these LLMs, which may not have been specifically trained for dialog modeling, is also important. This study examines different approaches for building dialog systems using LLMs by considering various aspects of the prompt. As part of prompt tuning, we experiment with various ways of providing instructions, exemplars, current query and additional context. The research also analyzes the representations of dialog history that have the optimal usable-information density. Based on the findings, the paper suggests more compact ways of providing dialog history information while ensuring good performance and reducing model{'}s inference-API costs. The research contributes to a better understanding of how LLMs can be effectively used for building interactive systems.", }
The use of large language models (LLMs) in natural language processing (NLP) tasks is rapidly increasing, leading to changes in how researchers approach problems in the field. To fully utilize these models{'} abilities, a better understanding of their behavior for different input protocols is required. With LLMs, users can directly interact with the models through a text-based interface to define and solve various tasks. Hence, understanding the conversational abilities of these LLMs, which may not have been specifically trained for dialog modeling, is also important. This study examines different approaches for building dialog systems using LLMs by considering various aspects of the prompt. As part of prompt tuning, we experiment with various ways of providing instructions, exemplars, current query and additional context. The research also analyzes the representations of dialog history that have the optimal usable-information density. Based on the findings, the paper suggests more compact ways of providing dialog history information while ensuring good performance and reducing model{'}s inference-API costs. The research contributes to a better understanding of how LLMs can be effectively used for building interactive systems.
[ "Santra, Bishal", "Basak, Sakya", "De, Abhin", "an", "Gupta, Manish", "Goyal, Pawan" ]
Frugal Prompting for Dialog Models
findings-emnlp.290
2305.14919
[ "https://github.com/bsantraigi/frugal-prompting" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.291.bib
https://aclanthology.org/2023.findings-emnlp.291/
@inproceedings{he-garner-2023-interpreter, title = "The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation", author = "He, Mutian and Garner, Philip", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.291", doi = "10.18653/v1/2023.findings-emnlp.291", pages = "4408--4423", abstract = "End-to-end spoken language understanding (SLU) remains elusive even with current large pretrained language models on text and speech, especially in multilingual cases. Machine translation has been established as a powerful pretraining objective on text as it enables the model to capture high-level semantics of the input utterance and associations between different languages, which is desired for speech models that work on lower-level acoustic frames. Motivated particularly by the task of cross-lingual SLU, we demonstrate that the task of speech translation (ST) is a good means of pretraining speech models for end-to-end SLU on both intra- and cross-lingual scenarios. By introducing ST, our models reach higher performance over baselines on monolingual and multilingual intent classification as well as spoken question answering using SLURP, MINDS-14, and NMSQA benchmarks. To verify the effectiveness of our methods, we also create new benchmark datasets from both synthetic and real sources, for speech summarization and low-resource/zero-shot transfer from English to French or Spanish. We further show the value of preserving knowledge for the ST pretraining task for better downstream performance, possibly using Bayesian transfer regularizers.", }
End-to-end spoken language understanding (SLU) remains elusive even with current large pretrained language models on text and speech, especially in multilingual cases. Machine translation has been established as a powerful pretraining objective on text as it enables the model to capture high-level semantics of the input utterance and associations between different languages, which is desired for speech models that work on lower-level acoustic frames. Motivated particularly by the task of cross-lingual SLU, we demonstrate that the task of speech translation (ST) is a good means of pretraining speech models for end-to-end SLU on both intra- and cross-lingual scenarios. By introducing ST, our models reach higher performance over baselines on monolingual and multilingual intent classification as well as spoken question answering using SLURP, MINDS-14, and NMSQA benchmarks. To verify the effectiveness of our methods, we also create new benchmark datasets from both synthetic and real sources, for speech summarization and low-resource/zero-shot transfer from English to French or Spanish. We further show the value of preserving knowledge for the ST pretraining task for better downstream performance, possibly using Bayesian transfer regularizers.
[ "He, Mutian", "Garner, Philip" ]
The Interpreter Understands Your Meaning: End-to-end Spoken Language Understanding Aided by Speech Translation
findings-emnlp.291
2305.09652
[ "https://github.com/idiap/translation-aided-slu" ]
https://huggingface.co/papers/2305.09652
0
0
0
2
[ "mutiann/translation-aided-slu" ]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.292.bib
https://aclanthology.org/2023.findings-emnlp.292/
@inproceedings{ding-etal-2023-maclasa, title = "{M}ac{L}a{S}a: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space", author = "Ding, Hanxing and Pang, Liang and Wei, Zihao and Shen, Huawei and Cheng, Xueqi and Chua, Tat-Seng", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.292", doi = "10.18653/v1/2023.findings-emnlp.292", pages = "4424--4436", abstract = "Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously. Traditional methods either require expensive iteration / searching within the discrete text space during the decoding stage, or train separate controllers for each aspect, resulting in a degradation of text quality due to the discrepancy between different aspects. To address these limitations, we introduce a novel approach for $\textbf{M}$ulti-$\textbf{a}$spect $\textbf{c}$ontrol, namely MacLaSa, that estimates compact $\textbf{La}$tent space for multiple aspects, and performs efficient $\textbf{Sa}$mpling with a fast sampler. To eliminate the domain discrepancies between different aspects, we first utilize a variational autoencoder (VAE) network to map text sequences from various data sources into close latent representations. The estimated latent space enables the formulation of joint energy-based models and the plugging in of arbitrary attribute discriminators to achieve multi-aspect control. Afterwards, we draw latent samples with a fast sampler based on ordinary differential equations and feed sampled examples to the VAE decoder to produce target text sequences. Experimental results demonstrate that MacLaSa outperforms strong baselines on both attribute relevance and textual quality while maintaining a high inference speed.", }
Multi-aspect controllable text generation aims to generate fluent sentences that possess multiple desired attributes simultaneously. Traditional methods either require expensive iteration / searching within the discrete text space during the decoding stage, or train separate controllers for each aspect, resulting in a degradation of text quality due to the discrepancy between different aspects. To address these limitations, we introduce a novel approach for $\textbf{M}$ulti-$\textbf{a}$spect $\textbf{c}$ontrol, namely MacLaSa, that estimates compact $\textbf{La}$tent space for multiple aspects, and performs efficient $\textbf{Sa}$mpling with a fast sampler. To eliminate the domain discrepancies between different aspects, we first utilize a variational autoencoder (VAE) network to map text sequences from various data sources into close latent representations. The estimated latent space enables the formulation of joint energy-based models and the plugging in of arbitrary attribute discriminators to achieve multi-aspect control. Afterwards, we draw latent samples with a fast sampler based on ordinary differential equations and feed sampled examples to the VAE decoder to produce target text sequences. Experimental results demonstrate that MacLaSa outperforms strong baselines on both attribute relevance and textual quality while maintaining a high inference speed.
[ "Ding, Hanxing", "Pang, Liang", "Wei, Zihao", "Shen, Huawei", "Cheng, Xueqi", "Chua, Tat-Seng" ]
MacLaSa: Multi-Aspect Controllable Text Generation via Efficient Sampling from Compact Latent Space
findings-emnlp.292
2305.12785
[ "https://github.com/trustedllm/maclasa" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.293.bib
https://aclanthology.org/2023.findings-emnlp.293/
@inproceedings{liu-etal-2023-hpe, title = "{HPE}: Answering Complex Questions over Text by Hybrid Question Parsing and Execution", author = "Liu, Ye and Yavuz, Semih and Meng, Rui and Radev, Dragomir and Xiong, Caiming and Joty, Shafiq and Zhou, Yingbo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.293", doi = "10.18653/v1/2023.findings-emnlp.293", pages = "4437--4451", abstract = "The dominant paradigm of textual question answering systems is based on end-to-end neural networks, which excels at answering natural language questions but falls short on complex ones. This stands in contrast to the broad adaptation of semantic parsing approaches over structured data sources (e.g., relational database, knowledge graphs), that convert natural language questions to logical forms and execute them with query engines. Towards combining the strengths of neural and symbolic methods, we propose a framework of question parsing and execution on textual QA. It comprises two central pillars: (1) We parse the question of varying complexity into an intermediate representation, named H-expression, which is composed of simple questions as the primitives and symbolic operations representing the relationships among them; (2) To execute the resulting H-expressions, we design a hybrid executor, which integrates the deterministic rules to translate the symbolic operations with a drop-in neural reader network to answer each decomposed simple question. Hence, the proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking. The resulting H-expressions closely guide the execution process, offering higher precision besides better interpretability while still preserving the advantages of the neural readers for resolving its primitive elements. Our extensive experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings, while also effectively exposing its underlying reasoning process.", }
The dominant paradigm of textual question answering systems is based on end-to-end neural networks, which excels at answering natural language questions but falls short on complex ones. This stands in contrast to the broad adaptation of semantic parsing approaches over structured data sources (e.g., relational database, knowledge graphs), that convert natural language questions to logical forms and execute them with query engines. Towards combining the strengths of neural and symbolic methods, we propose a framework of question parsing and execution on textual QA. It comprises two central pillars: (1) We parse the question of varying complexity into an intermediate representation, named H-expression, which is composed of simple questions as the primitives and symbolic operations representing the relationships among them; (2) To execute the resulting H-expressions, we design a hybrid executor, which integrates the deterministic rules to translate the symbolic operations with a drop-in neural reader network to answer each decomposed simple question. Hence, the proposed framework can be viewed as a top-down question parsing followed by a bottom-up answer backtracking. The resulting H-expressions closely guide the execution process, offering higher precision besides better interpretability while still preserving the advantages of the neural readers for resolving its primitive elements. Our extensive experiments on MuSiQue, 2WikiQA, HotpotQA, and NQ show that the proposed parsing and hybrid execution framework outperforms existing approaches in supervised, few-shot, and zero-shot settings, while also effectively exposing its underlying reasoning process.
[ "Liu, Ye", "Yavuz, Semih", "Meng, Rui", "Radev, Dragomir", "Xiong, Caiming", "Joty, Shafiq", "Zhou, Yingbo" ]
HPE: Answering Complex Questions over Text by Hybrid Question Parsing and Execution
findings-emnlp.293
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.294.bib
https://aclanthology.org/2023.findings-emnlp.294/
@inproceedings{liu-etal-2023-length, title = "Length-Adaptive Distillation: Customizing Small Language Model for Dynamic Token Pruning", author = "Liu, Chang and Tao, Chongyang and Liang, Jianxin and Feng, Jiazhan and Shen, Tao and Huang, Quzhe and Zhao, Dongyan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.294", doi = "10.18653/v1/2023.findings-emnlp.294", pages = "4452--4463", abstract = "Pre-trained language models greatly improve the performance of various tasks but at a cost of high computation overhead. To facilitate practical applications, there are mainly two lines of research to accelerate model inference: model compression and dynamic computation (e.g., dynamic token pruning). Existing works either adopt these methods individually or simply apply dynamic computation approaches upon a compressed small language model. We argue that they are sub-optimal since the two approaches are separately designed so the compressed model may not be tailored for dynamic computation. To tackle this problem and make compressed small language models faster, we propose Length-Adaptive Distillation, a two-stage knowledge distillation framework that aims to produce a customized small language model for dynamic token pruning. In the general distillation stage, we enforce the student to mimic and reconstruct the teacher{'}s output based on the dynamically pruned representations. Then in the task-specific distillation stage, the student is further accustomed to token pruning while absorbing the task-specific knowledge. Experimental results on GLUE benchmark demonstrate that our method can make the small language model more customized for dynamic token pruning and achieve better speed-performance trade-off.", }
Pre-trained language models greatly improve the performance of various tasks but at a cost of high computation overhead. To facilitate practical applications, there are mainly two lines of research to accelerate model inference: model compression and dynamic computation (e.g., dynamic token pruning). Existing works either adopt these methods individually or simply apply dynamic computation approaches upon a compressed small language model. We argue that they are sub-optimal since the two approaches are separately designed so the compressed model may not be tailored for dynamic computation. To tackle this problem and make compressed small language models faster, we propose Length-Adaptive Distillation, a two-stage knowledge distillation framework that aims to produce a customized small language model for dynamic token pruning. In the general distillation stage, we enforce the student to mimic and reconstruct the teacher{'}s output based on the dynamically pruned representations. Then in the task-specific distillation stage, the student is further accustomed to token pruning while absorbing the task-specific knowledge. Experimental results on GLUE benchmark demonstrate that our method can make the small language model more customized for dynamic token pruning and achieve better speed-performance trade-off.
[ "Liu, Chang", "Tao, Chongyang", "Liang, Jianxin", "Feng, Jiazhan", "Shen, Tao", "Huang, Quzhe", "Zhao, Dongyan" ]
Length-Adaptive Distillation: Customizing Small Language Model for Dynamic Token Pruning
findings-emnlp.294
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.295.bib
https://aclanthology.org/2023.findings-emnlp.295/
@inproceedings{upadhyaya-etal-2023-toxicity, title = "Toxicity, Morality, and Speech Act Guided Stance Detection", author = "Upadhyaya, Apoorva and Fisichella, Marco and Nejdl, Wolfgang", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.295", doi = "10.18653/v1/2023.findings-emnlp.295", pages = "4464--4478", abstract = "In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet{'}s stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.", }
In this work, we focus on the task of determining the public attitude toward various social issues discussed on social media platforms. Platforms such as Twitter, however, are often used to spread misinformation, fake news through polarizing views. Existing literature suggests that higher levels of toxicity prevalent in Twitter conversations often spread negativity and delay addressing issues. Further, the embedded moral values and speech acts specifying the intention of the tweet correlate with public opinions expressed on various topics. However, previous works, which mainly focus on stance detection, either ignore the speech act, toxic, and moral features of these tweets that can collectively help capture public opinion or lack an efficient architecture that can detect the attitudes across targets. Therefore, in our work, we focus on the main task of stance detection by exploiting the toxicity, morality, and speech act as auxiliary tasks. We propose a multitasking model TWISTED that initially extracts the valence, arousal, and dominance aspects hidden in the tweets and injects the emotional sense into the embedded text followed by an efficient attention framework to correctly detect the tweet{'}s stance by using the shared features of toxicity, morality, and speech acts present in the tweet. Extensive experiments conducted on 4 benchmark stance detection datasets (SemEval-2016, P-Stance, COVID19-Stance, and ClimateChange) comprising different domains demonstrate the effectiveness and generalizability of our approach.
[ "Upadhyaya, Apoorva", "Fisichella, Marco", "Nejdl, Wolfgang" ]
Toxicity, Morality, and Speech Act Guided Stance Detection
findings-emnlp.295
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.296.bib
https://aclanthology.org/2023.findings-emnlp.296/
@inproceedings{schouten-etal-2023-reasoning, title = "Reasoning about Ambiguous Definite Descriptions", author = "Schouten, Stefan and Bloem, Peter and Markov, Ilia and Vossen, Piek", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.296", doi = "10.18653/v1/2023.findings-emnlp.296", pages = "4479--4484", abstract = "Natural language reasoning plays an increasingly important role in improving language models{'} ability to solve complex language understanding tasks. An interesting use case for reasoning is the resolution of context-dependent ambiguity. But no resources exist to evaluate how well Large Language Models can use explicit reasoning to resolve ambiguity in language. We propose to use ambiguous definite descriptions for this purpose and create and publish the first benchmark dataset consisting of such phrases. Our method includes all information required to resolve the ambiguity in the prompt, which means a model does not require anything but reasoning to do well. We find this to be a challenging task for recent LLMs. Code and data available at: https://github.com/sfschouten/exploiting-ambiguity", }
Natural language reasoning plays an increasingly important role in improving language models{'} ability to solve complex language understanding tasks. An interesting use case for reasoning is the resolution of context-dependent ambiguity. But no resources exist to evaluate how well Large Language Models can use explicit reasoning to resolve ambiguity in language. We propose to use ambiguous definite descriptions for this purpose and create and publish the first benchmark dataset consisting of such phrases. Our method includes all information required to resolve the ambiguity in the prompt, which means a model does not require anything but reasoning to do well. We find this to be a challenging task for recent LLMs. Code and data available at: https://github.com/sfschouten/exploiting-ambiguity
[ "Schouten, Stefan", "Bloem, Peter", "Markov, Ilia", "Vossen, Piek" ]
Reasoning about Ambiguous Definite Descriptions
findings-emnlp.296
2310.14657
[ "https://github.com/sfschouten/exploiting-ambiguity" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.297.bib
https://aclanthology.org/2023.findings-emnlp.297/
@inproceedings{canby-hockenmaier-2023-framework, title = "A Framework for Bidirectional Decoding: Case Study in Morphological Inflection", author = "Canby, Marc and Hockenmaier, Julia", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.297", doi = "10.18653/v1/2023.findings-emnlp.297", pages = "4485--4507", abstract = "Transformer-based encoder-decoder models that generate outputs in a left-to-right fashion have become standard for sequence-to-sequence tasks. In this paper, we propose a framework for decoding that produces sequences from the {``}outside-in{''}: at each step, the model chooses to generate a token on the left, on the right, or join the left and right sequences. We argue that this is more principled than prior bidirectional decoders. Our proposal supports a variety of model architectures and includes several training methods, such as a dynamic programming algorithm that marginalizes out the latent ordering variable. Our model sets state-of-the-art (SOTA) on the 2022 and 2023 shared tasks, beating the next best systems by over 4.7 and 2.7 points in average accuracy respectively. The model performs particularly well on long sequences, can implicitly learn the split point of words composed of stem and affix, and performs better relative to the baseline on datasets that have fewer unique lemmas.", }
Transformer-based encoder-decoder models that generate outputs in a left-to-right fashion have become standard for sequence-to-sequence tasks. In this paper, we propose a framework for decoding that produces sequences from the {``}outside-in{''}: at each step, the model chooses to generate a token on the left, on the right, or join the left and right sequences. We argue that this is more principled than prior bidirectional decoders. Our proposal supports a variety of model architectures and includes several training methods, such as a dynamic programming algorithm that marginalizes out the latent ordering variable. Our model sets state-of-the-art (SOTA) on the 2022 and 2023 shared tasks, beating the next best systems by over 4.7 and 2.7 points in average accuracy respectively. The model performs particularly well on long sequences, can implicitly learn the split point of words composed of stem and affix, and performs better relative to the baseline on datasets that have fewer unique lemmas.
[ "Canby, Marc", "Hockenmaier, Julia" ]
A Framework for Bidirectional Decoding: Case Study in Morphological Inflection
findings-emnlp.297
2305.12580
[ "https://github.com/marccanby/bidi_decoding" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.298.bib
https://aclanthology.org/2023.findings-emnlp.298/
@inproceedings{fu-etal-2023-text, title = "Text-guided 3{D} Human Generation from 2{D} Collections", author = "Fu, Tsu-Jui and Xiong, Wenhan and Nie, Yixin and Liu, Jingyu and Oguz, Barlas and Wang, William", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.298", doi = "10.18653/v1/2023.findings-emnlp.298", pages = "4508--4520", abstract = "3D human modeling has been widely used for engaging interaction in gaming, film, and animation. The customization of these characters is crucial for creativity and scalability, which highlights the importance of controllability. In this work, we introduce Text-guided 3D Human Generation (T3H), where a model is to generate a 3D human, guided by the fashion description. There are two goals: 1) the 3D human should render articulately, and 2) its outfit is controlled by the given text. To address this T3H task, we propose Compositional Cross-modal Human (CCH). CCH adopts cross-modal attention to fuse compositional human rendering with the extracted fashion semantics. Each human body part perceives relevant textual guidance as its visual patterns. We incorporate the human prior and semantic discrimination to enhance 3D geometry transformation and fine-grained consistency, enabling it to learn from 2D collections for data efficiency. We conduct evaluations on DeepFashion and SHHQ with diverse fashion attributes covering the shape, fabric, and color of upper and lower clothing. Extensive experiments demonstrate that CCH achieves superior results for T3H with high efficiency.", }
3D human modeling has been widely used for engaging interaction in gaming, film, and animation. The customization of these characters is crucial for creativity and scalability, which highlights the importance of controllability. In this work, we introduce Text-guided 3D Human Generation (T3H), where a model is to generate a 3D human, guided by the fashion description. There are two goals: 1) the 3D human should render articulately, and 2) its outfit is controlled by the given text. To address this T3H task, we propose Compositional Cross-modal Human (CCH). CCH adopts cross-modal attention to fuse compositional human rendering with the extracted fashion semantics. Each human body part perceives relevant textual guidance as its visual patterns. We incorporate the human prior and semantic discrimination to enhance 3D geometry transformation and fine-grained consistency, enabling it to learn from 2D collections for data efficiency. We conduct evaluations on DeepFashion and SHHQ with diverse fashion attributes covering the shape, fabric, and color of upper and lower clothing. Extensive experiments demonstrate that CCH achieves superior results for T3H with high efficiency.
[ "Fu, Tsu-Jui", "Xiong, Wenhan", "Nie, Yixin", "Liu, Jingyu", "Oguz, Barlas", "Wang, William" ]
Text-guided 3D Human Generation from 2D Collections
findings-emnlp.298
2305.14312
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.299.bib
https://aclanthology.org/2023.findings-emnlp.299/
@inproceedings{huang-zhu-2023-statistically, title = "Statistically Profiling Biases in Natural Language Reasoning Datasets and Models", author = "Huang, Shanshan and Zhu, Kenny", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.299", doi = "10.18653/v1/2023.findings-emnlp.299", pages = "4521--4530", abstract = "Recent studies have shown that many natural language understanding and reasoning datasets contain statistical cues that can be exploited by NLP models, resulting in an overestimation of their capabilities. Existing methods, such as {``}hypothesis-only{''} tests and CheckList, are limited in identifying these cues and evaluating model weaknesses. We introduce ICQ (I-See-Cue), a lightweight, general statistical profiling framework that automatically identifies potential biases in multiple-choice NLU datasets without requiring additional test cases. ICQ assesses the extent to which models exploit these biases through black-box testing, addressing the limitations of current methods. In this work, we conduct a comprehensive evaluation of statistical biases in 10 popular NLU datasets and 4 models, confirming prior findings, revealing new insights, and offering an online demonstration system to encourage users to assess their own datasets and models. Furthermore, we present a case study on investigating ChatGPT{'}s bias, providing valuable recommendations for practical applications.", }
Recent studies have shown that many natural language understanding and reasoning datasets contain statistical cues that can be exploited by NLP models, resulting in an overestimation of their capabilities. Existing methods, such as {``}hypothesis-only{''} tests and CheckList, are limited in identifying these cues and evaluating model weaknesses. We introduce ICQ (I-See-Cue), a lightweight, general statistical profiling framework that automatically identifies potential biases in multiple-choice NLU datasets without requiring additional test cases. ICQ assesses the extent to which models exploit these biases through black-box testing, addressing the limitations of current methods. In this work, we conduct a comprehensive evaluation of statistical biases in 10 popular NLU datasets and 4 models, confirming prior findings, revealing new insights, and offering an online demonstration system to encourage users to assess their own datasets and models. Furthermore, we present a case study on investigating ChatGPT{'}s bias, providing valuable recommendations for practical applications.
[ "Huang, Shanshan", "Zhu, Kenny" ]
Statistically Profiling Biases in Natural Language Reasoning Datasets and Models
findings-emnlp.299
2102.04632
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.300.bib
https://aclanthology.org/2023.findings-emnlp.300/
@inproceedings{hao-linzen-2023-verb, title = "Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number", author = "Hao, Sophie and Linzen, Tal", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.300", doi = "10.18653/v1/2023.findings-emnlp.300", pages = "4531--4539", abstract = "Deep architectures such as Transformers are sometimes criticized for having uninterpretable {``}black-box{''} representations. We use causal intervention analysis to show that, in fact, some linguistic features are represented in a linear, interpretable format. Specifically, we show that BERT{'}s ability to conjugate verbs relies on a linear encoding of subject number that can be manipulated with predictable effects on conjugation accuracy. This encoding is found in the subject position at the first layer and the verb position at the last layer, but distributed across positions at middle layers, particularly when there are multiple cues to subject number.", }
Deep architectures such as Transformers are sometimes criticized for having uninterpretable {``}black-box{''} representations. We use causal intervention analysis to show that, in fact, some linguistic features are represented in a linear, interpretable format. Specifically, we show that BERT{'}s ability to conjugate verbs relies on a linear encoding of subject number that can be manipulated with predictable effects on conjugation accuracy. This encoding is found in the subject position at the first layer and the verb position at the last layer, but distributed across positions at middle layers, particularly when there are multiple cues to subject number.
[ "Hao, Sophie", "Linzen, Tal" ]
Verb Conjugation in Transformers Is Determined by Linear Encodings of Subject Number
findings-emnlp.300
2310.15151
[ "https://github.com/yidinghao/causal-conjugation" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.301.bib
https://aclanthology.org/2023.findings-emnlp.301/
@inproceedings{murahari-etal-2023-mux-plms, title = "{MUX}-{PLM}s: Data Multiplexing for High-throughput Language Models", author = "Murahari, Vishvak and Deshpande, Ameet and Jimenez, Carlos and Shafran, Izhak and Wang, Mingqiu and Cao, Yuan and Narasimhan, Karthik", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.301", doi = "10.18653/v1/2023.findings-emnlp.301", pages = "4540--4554", abstract = "The widespread adoption of large language models such as ChatGPT and Bard has led to unprecedented demand for these technologies. The burgeoning cost of inference for ever-increasing model sizes coupled with hardware shortages has limited affordable access and poses a pressing need for efficiency approaches geared towards high throughput and performance. Multi-input multi-output (MIMO) algorithms such as data multiplexing, offer a promising solution with a many-fold increase in throughput by performing inference for multiple inputs at the cost of a single input. Yet these approaches are not currently performant enough to be deployed in modern systems. We change that by developing MUX-PLMs, a class of high throughput pre-trained language models (PLMs) trained with data multiplexing, that can be fine-tuned for any downstream task to yield high-throughput high-performance. Our novel multiplexing and demultiplexing modules proficiently entangle and disentangle inputs, and enable high-performance high throughput MUX-PLMs that are competitive with vanilla PLMs while achieving 2x/5x inference speedup with only a 1-4 {\%} drop on a broad suite of tasks.", }
The widespread adoption of large language models such as ChatGPT and Bard has led to unprecedented demand for these technologies. The burgeoning cost of inference for ever-increasing model sizes coupled with hardware shortages has limited affordable access and poses a pressing need for efficiency approaches geared towards high throughput and performance. Multi-input multi-output (MIMO) algorithms such as data multiplexing, offer a promising solution with a many-fold increase in throughput by performing inference for multiple inputs at the cost of a single input. Yet these approaches are not currently performant enough to be deployed in modern systems. We change that by developing MUX-PLMs, a class of high throughput pre-trained language models (PLMs) trained with data multiplexing, that can be fine-tuned for any downstream task to yield high-throughput high-performance. Our novel multiplexing and demultiplexing modules proficiently entangle and disentangle inputs, and enable high-performance high throughput MUX-PLMs that are competitive with vanilla PLMs while achieving 2x/5x inference speedup with only a 1-4 {\%} drop on a broad suite of tasks.
[ "Murahari, Vishvak", "Deshp", "e, Ameet", "Jimenez, Carlos", "Shafran, Izhak", "Wang, Mingqiu", "Cao, Yuan", "Narasimhan, Karthik" ]
MUX-PLMs: Data Multiplexing for High-throughput Language Models
findings-emnlp.301
2302.12441
[ "https://github.com/princeton-nlp/datamux-pretraining" ]
https://huggingface.co/papers/2302.12441
0
0
0
7
[]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.302.bib
https://aclanthology.org/2023.findings-emnlp.302/
@inproceedings{lee-etal-2023-last, title = "That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?", author = "Lee, Jaechan and Liu, Alisa and Ahia, Orevaoghene and Gonen, Hila and Smith, Noah", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.302", doi = "10.18653/v1/2023.findings-emnlp.302", pages = "4555--4569", abstract = "The translation of ambiguous text presents a challenge for translation systems, as it requires using the surrounding context to disambiguate the intended meaning as much as possible. While prior work has studied ambiguities that result from different grammatical features of the source and target language, we study semantic ambiguities that exist in the source (English in this work) itself. In particular, we focus on idioms that are open to both literal and figurative interpretations (e.g., goose egg), and collect TIDE, a dataset of 512 pairs of English sentences containing idioms with disambiguating context such that one is literal (it laid a goose egg) and another is figurative (they scored a goose egg, as in a score of zero). In experiments, we compare MT-specific models and language models for (i) their preference when given an ambiguous subsentence, (ii) their sensitivity to disambiguating context, and (iii) the performance disparity between figurative and literal source sentences. We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation. On the other hand, LMs are far more context-aware, although there remain disparities across target languages. Our findings underline the potential of LMs as a strong backbone for context-aware translation.", }
The translation of ambiguous text presents a challenge for translation systems, as it requires using the surrounding context to disambiguate the intended meaning as much as possible. While prior work has studied ambiguities that result from different grammatical features of the source and target language, we study semantic ambiguities that exist in the source (English in this work) itself. In particular, we focus on idioms that are open to both literal and figurative interpretations (e.g., goose egg), and collect TIDE, a dataset of 512 pairs of English sentences containing idioms with disambiguating context such that one is literal (it laid a goose egg) and another is figurative (they scored a goose egg, as in a score of zero). In experiments, we compare MT-specific models and language models for (i) their preference when given an ambiguous subsentence, (ii) their sensitivity to disambiguating context, and (iii) the performance disparity between figurative and literal source sentences. We find that current MT models consistently translate English idioms literally, even when the context suggests a figurative interpretation. On the other hand, LMs are far more context-aware, although there remain disparities across target languages. Our findings underline the potential of LMs as a strong backbone for context-aware translation.
[ "Lee, Jaechan", "Liu, Alisa", "Ahia, Orevaoghene", "Gonen, Hila", "Smith, Noah" ]
That was the last straw, we need more: Are Translation Systems Sensitive to Disambiguating Context?
findings-emnlp.302
2310.14610
[ "https://github.com/jaechan-repo/mt-ambiguity" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.303.bib
https://aclanthology.org/2023.findings-emnlp.303/
@inproceedings{sileo-lernould-2023-mindgames, title = "{M}ind{G}ames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic", author = "Sileo, Damien and Lernould, Antoine", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.303", doi = "10.18653/v1/2023.findings-emnlp.303", pages = "4570--4577", abstract = "Theory of Mind (ToM) is a critical component of intelligence but its assessment remains the subject of heated debates. Prior research applied human ToM assessments to natural language processing models using either human-created standardized tests or rule-based templates. However, these methods primarily focus on simplistic reasoning and require further validation. Here, we leverage dynamic epistemic logic to isolate a particular component of ToM and to generate controlled problems. We also introduce new verbalization techniques to express these problems in English natural language. Our findings indicate that some language model scaling (from 70M to 6B and 350M to 174B) does not consistently yield results better than random chance. While GPT-4 demonstrates superior epistemic reasoning capabilities, there is still room for improvement. Our code and datasets are publicly available.", }
Theory of Mind (ToM) is a critical component of intelligence but its assessment remains the subject of heated debates. Prior research applied human ToM assessments to natural language processing models using either human-created standardized tests or rule-based templates. However, these methods primarily focus on simplistic reasoning and require further validation. Here, we leverage dynamic epistemic logic to isolate a particular component of ToM and to generate controlled problems. We also introduce new verbalization techniques to express these problems in English natural language. Our findings indicate that some language model scaling (from 70M to 6B and 350M to 174B) does not consistently yield results better than random chance. While GPT-4 demonstrates superior epistemic reasoning capabilities, there is still room for improvement. Our code and datasets are publicly available.
[ "Sileo, Damien", "Lernould, Antoine" ]
MindGames: Targeting Theory of Mind in Large Language Models with Dynamic Epistemic Modal Logic
findings-emnlp.303
2305.03353
[ "https://github.com/antoinelrnld/modlog" ]
https://huggingface.co/papers/2305.03353
1
0
0
2
[]
[ "sileod/mindgames" ]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.304.bib
https://aclanthology.org/2023.findings-emnlp.304/
@inproceedings{liu-etal-2023-latentlogic, title = "{LATENTLOGIC}: Learning Logic Rules in Latent Space over Knowledge Graphs", author = "Liu, Junnan and Mao, Qianren and Lin, Chenghua and Song, Yangqiu and Li, Jianxin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.304", doi = "10.18653/v1/2023.findings-emnlp.304", pages = "4578--4586", abstract = "Learning logic rules for knowledge graph reasoning is essential as such rules provide interpretable explanations for reasoning and can be generalized to different domains. However, existing methods often face challenges such as searching in a vast search space (e.g., enumeration of relational paths or multiplication of high-dimensional matrices) and inefficient optimization (e.g., techniques based on reinforcement learning or EM algorithm). To address these limitations, this paper proposes a novel framework called LatentLogic to efficiently mine logic rules by controllable generation in the latent space. Specifically, to map the discrete relational paths into the latent space, we leverage a pre-trained VAE and employ a discriminator to establish an energy-based distribution. Additionally, we incorporate a sampler based on ordinary differential equations, enabling the efficient generation of logic rules in our approach. Extensive experiments on benchmark datasets demonstrate the effectiveness and efficiency of our proposed method.", }
Learning logic rules for knowledge graph reasoning is essential as such rules provide interpretable explanations for reasoning and can be generalized to different domains. However, existing methods often face challenges such as searching in a vast search space (e.g., enumeration of relational paths or multiplication of high-dimensional matrices) and inefficient optimization (e.g., techniques based on reinforcement learning or EM algorithm). To address these limitations, this paper proposes a novel framework called LatentLogic to efficiently mine logic rules by controllable generation in the latent space. Specifically, to map the discrete relational paths into the latent space, we leverage a pre-trained VAE and employ a discriminator to establish an energy-based distribution. Additionally, we incorporate a sampler based on ordinary differential equations, enabling the efficient generation of logic rules in our approach. Extensive experiments on benchmark datasets demonstrate the effectiveness and efficiency of our proposed method.
[ "Liu, Junnan", "Mao, Qianren", "Lin, Chenghua", "Song, Yangqiu", "Li, Jianxin" ]
LATENTLOGIC: Learning Logic Rules in Latent Space over Knowledge Graphs
findings-emnlp.304
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.305.bib
https://aclanthology.org/2023.findings-emnlp.305/
@inproceedings{asl-etal-2023-robustembed, title = "{R}obust{E}mbed: Robust Sentence Embeddings Using Self-Supervised Contrastive Pre-Training", author = "Asl, Javad and Blanco, Eduardo and Takabi, Daniel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.305", doi = "10.18653/v1/2023.findings-emnlp.305", pages = "4587--4603", abstract = "Pre-trained language models (PLMs) have demonstrated their exceptional performance across a wide range of natural language processing tasks. The utilization of PLM-based sentence embeddings enables the generation of contextual representations that capture rich semantic information. However, despite their success with unseen samples, current PLM-based representations suffer from poor robustness in adversarial scenarios. In this paper, we propose RobustEmbed, a self-supervised sentence embedding framework that enhances both generalization and robustness in various text representation tasks and against diverse adversarial attacks. By generating high-risk adversarial perturbations to promote higher invariance in the embedding space and leveraging the perturbation within a novel contrastive objective approach, RobustEmbed effectively learns high-quality sentence embeddings. Our extensive experiments validate the superiority of RobustEmbed over previous state-of-the-art self-supervised representations in adversarial settings, while also showcasing relative improvements in seven semantic textual similarity (STS) tasks and six transfer tasks. Specifically, our framework achieves a significant reduction in attack success rate from 75.51{\%} to 39.62{\%} for the BERTAttack attack technique, along with enhancements of 1.20{\%} and 0.40{\%} in STS tasks and transfer tasks, respectively.", }
Pre-trained language models (PLMs) have demonstrated their exceptional performance across a wide range of natural language processing tasks. The utilization of PLM-based sentence embeddings enables the generation of contextual representations that capture rich semantic information. However, despite their success with unseen samples, current PLM-based representations suffer from poor robustness in adversarial scenarios. In this paper, we propose RobustEmbed, a self-supervised sentence embedding framework that enhances both generalization and robustness in various text representation tasks and against diverse adversarial attacks. By generating high-risk adversarial perturbations to promote higher invariance in the embedding space and leveraging the perturbation within a novel contrastive objective approach, RobustEmbed effectively learns high-quality sentence embeddings. Our extensive experiments validate the superiority of RobustEmbed over previous state-of-the-art self-supervised representations in adversarial settings, while also showcasing relative improvements in seven semantic textual similarity (STS) tasks and six transfer tasks. Specifically, our framework achieves a significant reduction in attack success rate from 75.51{\%} to 39.62{\%} for the BERTAttack attack technique, along with enhancements of 1.20{\%} and 0.40{\%} in STS tasks and transfer tasks, respectively.
[ "Asl, Javad", "Blanco, Eduardo", "Takabi, Daniel" ]
RobustEmbed: Robust Sentence Embeddings Using Self-Supervised Contrastive Pre-Training
findings-emnlp.305
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.306.bib
https://aclanthology.org/2023.findings-emnlp.306/
@inproceedings{fang-etal-2023-votes, title = "More than Votes? Voting and Language based Partisanship in the {US} {S}upreme {C}ourt", author = "Fang, Biaoyan and Cohn, Trevor and Baldwin, Timothy and Frermann, Lea", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.306", doi = "10.18653/v1/2023.findings-emnlp.306", pages = "4604--4614", abstract = "Understanding the prevalence and dynamics of justice partisanship and ideology in the US Supreme Court is critical in studying jurisdiction. Most research quantifies partisanship based on voting behavior, and oral arguments in the courtroom {---} the last essential procedure before the final case outcome {---} have not been well studied for this purpose. To address this gap, we present a framework for analyzing the language of justices in the courtroom for partisan signals, and study how partisanship in speech aligns with voting patterns. Our results show that the affiliated party of justices can be predicted reliably from their oral contributions. We further show a strong correlation between language partisanship and voting ideology.", }
Understanding the prevalence and dynamics of justice partisanship and ideology in the US Supreme Court is critical in studying jurisdiction. Most research quantifies partisanship based on voting behavior, and oral arguments in the courtroom {---} the last essential procedure before the final case outcome {---} have not been well studied for this purpose. To address this gap, we present a framework for analyzing the language of justices in the courtroom for partisan signals, and study how partisanship in speech aligns with voting patterns. Our results show that the affiliated party of justices can be predicted reliably from their oral contributions. We further show a strong correlation between language partisanship and voting ideology.
[ "Fang, Biaoyan", "Cohn, Trevor", "Baldwin, Timothy", "Frermann, Lea" ]
More than Votes? Voting and Language based Partisanship in the US Supreme Court
findings-emnlp.306
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.307.bib
https://aclanthology.org/2023.findings-emnlp.307/
@inproceedings{yue-etal-2023-automatic, title = "Automatic Evaluation of Attribution by Large Language Models", author = "Yue, Xiang and Wang, Boshi and Chen, Ziru and Zhang, Kai and Su, Yu and Sun, Huan", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.307", doi = "10.18653/v1/2023.findings-emnlp.307", pages = "4615--4635", abstract = "A recent focus of large language model (LLM) development, as exemplified by generative search engines, is to incorporate external references to generate and support its claims. However, evaluating the attribution, i.e., verifying whether the generated statement is fully supported by the cited reference, remains an open problem. Although human evaluation is common practice, it is costly and time-consuming. In this paper, we investigate automatic evaluation of attribution given by LLMs. We begin by defining different types of attribution errors, and then explore two approaches for automatic evaluation: prompting LLMs and fine-tuning smaller LMs. The fine-tuning data is repurposed from related tasks such as question answering, fact-checking, natural language inference, and summarization. We manually curate a set of test examples covering 12 domains from a generative search engine, New Bing. Our results on this curated test set and simulated examples from existing benchmarks highlight both promising signals and challenges. We hope our problem formulation, testbeds, and findings will help lay the foundation for future studies on this important problem.", }
A recent focus of large language model (LLM) development, as exemplified by generative search engines, is to incorporate external references to generate and support its claims. However, evaluating the attribution, i.e., verifying whether the generated statement is fully supported by the cited reference, remains an open problem. Although human evaluation is common practice, it is costly and time-consuming. In this paper, we investigate automatic evaluation of attribution given by LLMs. We begin by defining different types of attribution errors, and then explore two approaches for automatic evaluation: prompting LLMs and fine-tuning smaller LMs. The fine-tuning data is repurposed from related tasks such as question answering, fact-checking, natural language inference, and summarization. We manually curate a set of test examples covering 12 domains from a generative search engine, New Bing. Our results on this curated test set and simulated examples from existing benchmarks highlight both promising signals and challenges. We hope our problem formulation, testbeds, and findings will help lay the foundation for future studies on this important problem.
[ "Yue, Xiang", "Wang, Boshi", "Chen, Ziru", "Zhang, Kai", "Su, Yu", "Sun, Huan" ]
Automatic Evaluation of Attribution by Large Language Models
findings-emnlp.307
2305.06311
[ "https://github.com/osu-nlp-group/attrscore" ]
https://huggingface.co/papers/2305.06311
2
0
0
6
[]
[ "osunlp/AttrScore" ]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.308.bib
https://aclanthology.org/2023.findings-emnlp.308/
@inproceedings{sengupta-etal-2023-modeling, title = "Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms", author = "Sengupta, Meghdut and Alshomary, Milad and Scharlau, Ingrid and Wachsmuth, Henning", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.308", doi = "10.18653/v1/2023.findings-emnlp.308", pages = "4636--4659", abstract = "Metaphorical language, such as {``}spending time together{''}, projects meaning from a source domain (here, $\textit{money}$) to a target domain ($\textit{time}$). Thereby, it highlights certain aspects of the target domain, such as the $\textit{effort}$ behind the time investment. Highlighting aspects with metaphors (while hiding others) bridges the two domains and is the core of metaphorical meaning construction. For metaphor interpretation, linguistic theories stress that identifying the highlighted aspects is important for a better understanding of metaphors. However, metaphor research in NLP has not yet dealt with the phenomenon of highlighting. In this paper, we introduce the task of identifying the main aspect highlighted in a metaphorical sentence. Given the inherent interaction of source domains and highlighted aspects, we propose two multitask approaches - a joint learning approach and a continual learning approach - based on a finetuned contrastive learning model to jointly predict highlighted aspects and source domains. We further investigate whether (predicted) information about a source domain leads to better performance in predicting the highlighted aspects, and vice versa. Our experiments on an existing corpus suggest that, with the corresponding information, the performance to predict the other improves in terms of model accuracy in predicting highlighted aspects and source domains notably compared to the single-task baselines.", }
Metaphorical language, such as "spending time together", projects meaning from a source domain (here, money) to a target domain (time). Thereby, it highlights certain aspects of the target domain, such as the effort behind the time investment. Highlighting aspects with metaphors (while hiding others) bridges the two domains and is the core of metaphorical meaning construction. For metaphor interpretation, linguistic theories stress that identifying the highlighted aspects is important for a better understanding of metaphors. However, metaphor research in NLP has not yet dealt with the phenomenon of highlighting. In this paper, we introduce the task of identifying the main aspect highlighted in a metaphorical sentence. Given the inherent interaction of source domains and highlighted aspects, we propose two multitask approaches, a joint learning approach and a continual learning approach, based on a fine-tuned contrastive learning model to jointly predict highlighted aspects and source domains. We further investigate whether (predicted) information about a source domain leads to better performance in predicting the highlighted aspects, and vice versa. Our experiments on an existing corpus suggest that providing information about one task notably improves model accuracy on the other (predicting highlighted aspects or source domains) compared to the single-task baselines.
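A minimal sketch of the multitask setup, assuming a pooled sentence embedding from a contrastive encoder and hypothetical label-set sizes: two linear heads jointly predict the highlighted aspect and the source domain, trained with a summed cross-entropy loss.

```python
# Minimal multitask sketch; encoder dimension and label counts are hypothetical.
import torch
import torch.nn as nn

class MultitaskHeads(nn.Module):
    def __init__(self, enc_dim=768, n_aspects=10, n_domains=12):
        super().__init__()
        self.aspect_head = nn.Linear(enc_dim, n_aspects)  # highlighted aspect
        self.domain_head = nn.Linear(enc_dim, n_domains)  # source domain

    def forward(self, h):  # h: (batch, enc_dim) pooled contrastive embedding
        return self.aspect_head(h), self.domain_head(h)

model = MultitaskHeads()
h = torch.randn(4, 768)  # stand-in for the contrastive encoder's output
aspect_logits, domain_logits = model(h)
loss = nn.functional.cross_entropy(aspect_logits, torch.randint(0, 10, (4,))) \
     + nn.functional.cross_entropy(domain_logits, torch.randint(0, 12, (4,)))
loss.backward()
```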
[ "Sengupta, Meghdut", "Alshomary, Milad", "Scharlau, Ingrid", "Wachsmuth, Henning" ]
Modeling Highlighting of Metaphors in Multitask Contrastive Learning Paradigms
findings-emnlp.308
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.309.bib
https://aclanthology.org/2023.findings-emnlp.309/
@inproceedings{wang-etal-2023-ldm2, title = "{LDM}$^2$: A Large Decision Model Imitating Human Cognition with Dynamic Memory Enhancement", author = "Wang, Xingjin and Li, Linjing and Zeng, Daniel", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.309", doi = "10.18653/v1/2023.findings-emnlp.309", pages = "4660--4681", abstract = "With the rapid development of large language models (LLMs), it is highly demanded that LLMs can be adopted to make decisions to enable the artificial general intelligence. Most approaches leverage manually crafted examples to prompt the LLMs to imitate the decision process of human. However, designing optimal prompts is difficult and the patterned prompts can hardly be generalized to more complex environments. In this paper, we propose a novel model named Large Decision Model with Memory (LDM$^2$), which leverages a dynamic memory mechanism to construct dynamic prompts, guiding the LLMs in making proper decisions according to the faced state. LDM$^2$ consists of two stages: memory formation and memory refinement. In the former stage, human behaviors are decomposed into state-action tuples utilizing the powerful summarizing ability of LLMs. Then, these tuples are stored in the memory, whose indices are generated by the LLMs, to facilitate the retrieval of the most relevant subset of memorized tuples based on the current state. In the latter stage, our LDM$^2$ employs tree exploration to discover more suitable decision processes and enrich the memory by adding valuable state-action tuples. The dynamic circle of exploration and memory enhancement provides LDM$^2$ a better understanding of the global environment. Extensive experiments conducted in two interactive environments have shown that our LDM$^2$ outperforms the baselines in terms of both score and success rate, which demonstrates its effectiveness.", }
With the rapid development of large language models (LLMs), there is strong demand for adopting LLMs as decision makers on the path toward artificial general intelligence. Most approaches leverage manually crafted examples to prompt LLMs to imitate the human decision process. However, designing optimal prompts is difficult, and such fixed prompt patterns generalize poorly to more complex environments. In this paper, we propose a novel model named Large Decision Model with Memory (LDM$^2$), which leverages a dynamic memory mechanism to construct dynamic prompts, guiding the LLMs to make proper decisions according to the current state. LDM$^2$ consists of two stages: memory formation and memory refinement. In the former stage, human behaviors are decomposed into state-action tuples utilizing the powerful summarizing ability of LLMs. These tuples are then stored in the memory, whose indices are generated by the LLMs, to facilitate retrieval of the subset of memorized tuples most relevant to the current state. In the latter stage, LDM$^2$ employs tree exploration to discover more suitable decision processes and enrich the memory by adding valuable state-action tuples. This dynamic cycle of exploration and memory enhancement gives LDM$^2$ a better understanding of the global environment. Extensive experiments conducted in two interactive environments show that LDM$^2$ outperforms the baselines in terms of both score and success rate, demonstrating its effectiveness.
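The memory mechanism can be illustrated with a minimal, dependency-free sketch: state-action tuples are stored, and the tuples whose states overlap most with the current state are retrieved to build the prompt. Retrieval by Jaccard word overlap is a simplification standing in for the paper's LLM-generated indices, and the tuples below are hypothetical.

```python
# Minimal sketch of a state-action memory; Jaccard retrieval stands in for
# the paper's LLM-generated indices, and the tuples below are hypothetical.
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / max(1, len(ta | tb))

memory = [
    ("in kitchen, knife on counter, recipe needs chopped onion", "take knife"),
    ("in hallway, door closed, key in pocket", "unlock door"),
]

def retrieve(state: str, k: int = 1):
    return sorted(memory, key=lambda m: jaccard(state, m[0]), reverse=True)[:k]

state = "in kitchen, onion on counter"
for s, a in retrieve(state):
    print(f"Observed state: {s} -> action: {a}")  # becomes part of the prompt
```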
[ "Wang, Xingjin", "Li, Linjing", "Zeng, Daniel" ]
LDM^2: A Large Decision Model Imitating Human Cognition with Dynamic Memory Enhancement
findings-emnlp.309
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.310.bib
https://aclanthology.org/2023.findings-emnlp.310/
@inproceedings{chen-etal-2023-zara, title = "{ZARA}: Improving Few-Shot Self-Rationalization for Small Language Models", author = "Chen, Wei-Lin and Yen, An-Zi and Wu, Cheng-Kuang and Huang, Hen-Hsen and Chen, Hsin-Hsi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.310", doi = "10.18653/v1/2023.findings-emnlp.310", pages = "4682--4693", abstract = "Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gain for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations only emerges with large-scale LMs, which have poor accessibility. In this work, we explore the less-studied setting of leveraging explanations for small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), to automatically construct pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show ZARA achieves SOTA performance on the FEB benchmark, for both the task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluation validating ZARA{'}s ability to automatically identify plausible and accurate rationale-answer pairs.", }
Language models (LMs) that jointly generate end-task answers as well as free-text rationales are known as self-rationalization models. Recent works demonstrate great performance gains for self-rationalization by few-shot prompting LMs with rationale-augmented exemplars. However, the ability to benefit from explanations only emerges with large-scale LMs, which are not widely accessible. In this work, we explore the less-studied setting of leveraging explanations for small LMs to improve few-shot self-rationalization. We first revisit the relationship between rationales and answers. Inspired by the implicit mental process of how human beings assess explanations, we present a novel approach, Zero-shot Augmentation of Rationale-Answer pairs (ZARA), which automatically constructs pseudo-parallel data for self-training by reducing the problem of plausibility judgement to natural language inference. Experimental results show ZARA achieves SOTA performance on the FEB benchmark, for both task accuracy and the explanation metric. In addition, we conduct human and quantitative evaluations validating ZARA's ability to automatically identify plausible and accurate rationale-answer pairs.
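The NLI reduction can be sketched with an off-the-shelf entailment model: treat the rationale (plus question) as the premise and the answer statement as the hypothesis, and keep pairs whose entailment probability clears a threshold for self-training. The model choice, premise/hypothesis composition, and threshold are assumptions, not ZARA's exact recipe.

```python
# Hedged sketch of NLI-based filtering of rationale-answer pairs; the model,
# premise/hypothesis composition, and threshold are assumptions.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

tok = AutoTokenizer.from_pretrained("roberta-large-mnli")
nli = AutoModelForSequenceClassification.from_pretrained("roberta-large-mnli")

def entailment_prob(premise: str, hypothesis: str) -> float:
    inputs = tok(premise, hypothesis, return_tensors="pt", truncation=True)
    with torch.no_grad():
        logits = nli(**inputs).logits
    # roberta-large-mnli label order: 0 contradiction, 1 neutral, 2 entailment
    return torch.softmax(logits, dim=-1)[0, 2].item()

premise = ("Q: Can penguins fly? Rationale: Penguins are birds with flippers "
           "adapted for swimming, not flying.")
hypothesis = "The answer is no."
if entailment_prob(premise, hypothesis) > 0.9:  # hypothetical threshold
    print("keep pair for self-training")
```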
[ "Chen, Wei-Lin", "Yen, An-Zi", "Wu, Cheng-Kuang", "Huang, Hen-Hsen", "Chen, Hsin-Hsi" ]
ZARA: Improving Few-Shot Self-Rationalization for Small Language Models
findings-emnlp.310
2305.07355
[ "https://github.com/ntunlplab/zara" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.311.bib
https://aclanthology.org/2023.findings-emnlp.311/
@inproceedings{lin-etal-2023-toxicchat, title = "{T}oxic{C}hat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-{AI} Conversation", author = "Lin, Zi and Wang, Zihan and Tong, Yongqi and Wang, Yangkun and Guo, Yuxin and Wang, Yujia and Shang, Jingbo", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.311", doi = "10.18653/v1/2023.findings-emnlp.311", pages = "4694--4702", abstract = "Despite remarkable advances that large language models have achieved in chatbots nowadays, maintaining a non-toxic user-AI interactive environment has become increasingly critical nowadays. However, previous efforts in toxicity detection have been mostly based on benchmarks derived from social media contents, leaving the unique challenges inherent to real-world user-AI interactions insufficiently explored. In this work, we introduce ToxicChat, a novel benchmark constructed based on real user queries from an open-source chatbot. This benchmark contains the rich, nuanced phenomena that can be tricky for current toxicity detection models to identify, revealing a significant domain difference when compared to social media contents. Our systematic evaluation of models trained on existing toxicity datasets has shown their shortcomings when applied to this unique domain of ToxicChat. Our work illuminates the potentially overlooked challenges of toxicity detection in real-world user-AI conversations. In the future, ToxicChat can be a valuable resource to drive further advancements toward building a safe and healthy environment for user-AI interactions.", }
Despite the remarkable advances that large language models have achieved in chatbots, maintaining a non-toxic user-AI interactive environment has become increasingly critical. However, previous efforts in toxicity detection have been mostly based on benchmarks derived from social media content, leaving the unique challenges inherent to real-world user-AI interactions insufficiently explored. In this work, we introduce ToxicChat, a novel benchmark constructed from real user queries to an open-source chatbot. This benchmark contains rich, nuanced phenomena that can be tricky for current toxicity detection models to identify, revealing a significant domain difference compared to social media content. Our systematic evaluation of models trained on existing toxicity datasets shows their shortcomings when applied to this unique domain of ToxicChat. Our work illuminates the potentially overlooked challenges of toxicity detection in real-world user-AI conversations. In the future, ToxicChat can be a valuable resource to drive further advancements toward building a safe and healthy environment for user-AI interactions.
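This record lists lmsys/toxic-chat among the released artifacts; a hedged loading sketch follows. The configuration and field names are taken from memory of the dataset card and should be treated as assumptions.

```python
# Hedged sketch: load ToxicChat with Hugging Face datasets. The config name
# and column names are assumptions; consult the dataset card to confirm.
from datasets import load_dataset

ds = load_dataset("lmsys/toxic-chat", "toxicchat0124", split="test")
example = ds[0]
print(example["user_input"][:200], "-> toxicity label:", example["toxicity"])
```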
[ "Lin, Zi", "Wang, Zihan", "Tong, Yongqi", "Wang, Yangkun", "Guo, Yuxin", "Wang, Yujia", "Shang, Jingbo" ]
ToxicChat: Unveiling Hidden Challenges of Toxicity Detection in Real-World User-AI Conversation
findings-emnlp.311
2310.17389
[ "" ]
https://huggingface.co/papers/2310.17389
0
0
0
7
[ "google/shieldgemma-2b", "google/shieldgemma-27b", "google/shieldgemma-9b", "lmsys/toxicchat-t5-large-v1.0", "QuantFactory/shieldgemma-2b-GGUF", "QuantFactory/shieldgemma-9b-GGUF", "LiteLLMs/shieldgemma-2b-GGUF", "LiteLLMs/shieldgemma-9b-GGUF" ]
[ "lmsys/toxic-chat", "d-llm/toxic-chat" ]
[ "coium/google-shieldgemma-2b" ]
1
Poster
https://aclanthology.org/2023.findings-emnlp.312.bib
https://aclanthology.org/2023.findings-emnlp.312/
@inproceedings{stahl-etal-2023-mind, title = "Mind the Gap: Automated Corpus Creation for Enthymeme Detection and Reconstruction in Learner Arguments", author = {Stahl, Maja and D{\"u}sterhus, Nick and Chen, Mei-Hua and Wachsmuth, Henning}, editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.312", doi = "10.18653/v1/2023.findings-emnlp.312", pages = "4703--4717", abstract = "Writing strong arguments can be challenging for learners. It requires to select and arrange multiple argumentative discourse units (ADUs) in a logical and coherent way as well as to decide which ADUs to leave implicit, so called enthymemes. However, when important ADUs are missing, readers might not be able to follow the reasoning or understand the argument{'}s main point. This paper introduces two new tasks for learner arguments: to identify gaps in arguments (enthymeme detection) and to fill such gaps (enthymeme reconstruction). Approaches to both tasks may help learners improve their argument quality. We study how corpora for these tasks can be created automatically by deleting ADUs from an argumentative text that are central to the argument and its quality, while maintaining the text{'}s naturalness. Based on the ICLEv3 corpus of argumentative learner essays, we create 40,089 argument instances for enthymeme detection and reconstruction. Through manual studies, we provide evidence that the proposed corpus creation process leads to the desired quality reduction, and results in arguments that are similarly natural to those written by learners. Finally, first baseline approaches to enthymeme detection and reconstruction demonstrate the corpus{'} usefulness.", }
Writing strong arguments can be challenging for learners. It requires selecting and arranging multiple argumentative discourse units (ADUs) in a logical and coherent way, as well as deciding which ADUs to leave implicit, so-called enthymemes. However, when important ADUs are missing, readers might not be able to follow the reasoning or understand the argument's main point. This paper introduces two new tasks for learner arguments: to identify gaps in arguments (enthymeme detection) and to fill such gaps (enthymeme reconstruction). Approaches to both tasks may help learners improve their argument quality. We study how corpora for these tasks can be created automatically by deleting ADUs from an argumentative text that are central to the argument and its quality, while maintaining the text's naturalness. Based on the ICLEv3 corpus of argumentative learner essays, we create 40,089 argument instances for enthymeme detection and reconstruction. Through manual studies, we provide evidence that the proposed corpus creation process leads to the desired quality reduction, and results in arguments that are similarly natural to those written by learners. Finally, first baseline approaches to enthymeme detection and reconstruction demonstrate the corpus' usefulness.
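The corpus-creation idea (delete a central ADU, keep its position and gold text) can be sketched in a few lines; the centrality scoring that decides which ADU to remove is the paper's actual contribution and is stubbed out here as a hypothetical argument.

```python
# Minimal sketch of enthymeme instance creation; selecting which ADU is
# "central" (the paper's quality-based criterion) is stubbed out here.
def make_enthymeme_instance(adus: list[str], central_idx: int) -> dict:
    gapped = adus[:central_idx] + adus[central_idx + 1:]
    return {
        "argument": " ".join(gapped),      # input for both tasks
        "gap_index": central_idx,          # gold label for enthymeme detection
        "target_adu": adus[central_idx],   # gold output for reconstruction
    }

adus = [
    "Homework reinforces what is taught in class.",
    "Regular practice is essential for retaining new skills.",
    "Therefore, schools should keep assigning homework.",
]
print(make_enthymeme_instance(adus, central_idx=1))
```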
[ "Stahl, Maja", "D{\\\"u}sterhus, Nick", "Chen, Mei-Hua", "Wachsmuth, Henning" ]
Mind the Gap: Automated Corpus Creation for Enthymeme Detection and Reconstruction in Learner Arguments
findings-emnlp.312
2310.18098
[ "https://github.com/webis-de/emnlp-23" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.313.bib
https://aclanthology.org/2023.findings-emnlp.313/
@inproceedings{yang-etal-2023-dior, title = "Dior-{CVAE}: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation", author = "Yang, Tianyu and Tran, Thy Thy and Gurevych, Iryna", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.313", doi = "10.18653/v1/2023.findings-emnlp.313", pages = "4718--4735", abstract = "Current variational dialog models have employed pre-trained language models (PLMs) to parameterize the likelihood and posterior distributions. However, the Gaussian assumption made on the prior distribution is incompatible with these distributions, thus restricting the diversity of generated responses. These models also suffer from posterior collapse, i.e., the decoder tends to ignore latent variables and directly access information captured in the encoder through the cross-attention mechanism. In this work, we propose Dior-CVAE, a hierarchical conditional variational autoencoder (CVAE) with diffusion priors to address these challenges. We employ a diffusion model to increase the complexity of the prior distribution and its compatibility with the distributions produced by a PLM. Also, we propose memory dropout to the cross-attention mechanism, which actively encourages the use of latent variables for response generation. Overall, experiments across two commonly used open-domain dialog datasets show that our method can generate more diverse responses without large-scale dialog pre-training. Code is available at https://github.com/UKPLab/dior-cvae.", }
Current variational dialog models have employed pre-trained language models (PLMs) to parameterize the likelihood and posterior distributions. However, the Gaussian assumption made on the prior distribution is incompatible with these distributions, thus restricting the diversity of generated responses. These models also suffer from posterior collapse, i.e., the decoder tends to ignore latent variables and directly access information captured in the encoder through the cross-attention mechanism. In this work, we propose Dior-CVAE, a hierarchical conditional variational autoencoder (CVAE) with diffusion priors to address these challenges. We employ a diffusion model to increase the complexity of the prior distribution and its compatibility with the distributions produced by a PLM. Also, we propose memory dropout for the cross-attention mechanism, which actively encourages the use of latent variables for response generation. Overall, experiments across two commonly used open-domain dialog datasets show that our method can generate more diverse responses without large-scale dialog pre-training. Code is available at https://github.com/UKPLab/dior-cvae.
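A minimal PyTorch sketch of memory dropout as described: encoder memory vectors are randomly masked during training so the decoder cannot rely solely on cross-attention and is pushed to use the latent variable. The masking granularity and rate below are assumptions.

```python
# Hedged sketch of memory dropout; dropping whole time steps at rate p is an
# assumption about granularity, not necessarily the paper's exact variant.
import torch

def memory_dropout(memory: torch.Tensor, p: float = 0.2, training: bool = True):
    # memory: (batch, src_len, hidden) encoder states fed to cross-attention
    if not training or p == 0.0:
        return memory
    keep = torch.rand(memory.shape[:2], device=memory.device) > p
    return memory * keep.unsqueeze(-1)  # zero out dropped positions

mem = torch.randn(2, 5, 8)
print(memory_dropout(mem).abs().sum(dim=-1))  # some positions are zeroed
```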
[ "Yang, Tianyu", "Tran, Thy Thy", "Gurevych, Iryna" ]
Dior-CVAE: Pre-trained Language Models and Diffusion Priors for Variational Dialog Generation
findings-emnlp.313
2305.15025
[ "https://github.com/ukplab/dior-cvae" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.314.bib
https://aclanthology.org/2023.findings-emnlp.314/
@inproceedings{zhao-etal-2023-retrieving, title = "Retrieving Multimodal Information for Augmented Generation: A Survey", author = "Zhao, Ruochen and Chen, Hailin and Wang, Weishi and Jiao, Fangkai and Do, Xuan Long and Qin, Chengwei and Ding, Bosheng and Guo, Xiaobao and Li, Minzhi and Li, Xingxuan and Joty, Shafiq", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.314", doi = "10.18653/v1/2023.findings-emnlp.314", pages = "4736--4756", abstract = "As Large Language Models (LLMs) become popular, there emerged an important trend of using multimodality to augment the LLMs{'} generation ability, which enables LLMs to better interact with the world. However, there lacks a unified perception of at which stage and how to incorporate different modalities. In this survey, we review methods that assist and augment generative models by retrieving multimodal knowledge, whose formats range from images, codes, tables, graphs, to audio. Such methods offer a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness. By providing an in-depth review, this survey is expected to provide scholars with a deeper understanding of the methods{'} applications and encourage them to adapt existing techniques to the fast-growing field of LLMs.", }
As Large Language Models (LLMs) become popular, an important trend has emerged: using multimodality to augment the LLMs' generation ability, which enables LLMs to better interact with the world. However, there is no unified view of at which stage, and how, different modalities should be incorporated. In this survey, we review methods that assist and augment generative models by retrieving multimodal knowledge, in formats ranging from images, code, tables, and graphs to audio. Such methods offer a promising solution to important concerns such as factuality, reasoning, interpretability, and robustness. By providing an in-depth review, this survey is expected to provide scholars with a deeper understanding of the methods' applications and encourage them to adapt existing techniques to the fast-growing field of LLMs.
[ "Zhao, Ruochen", "Chen, Hailin", "Wang, Weishi", "Jiao, Fangkai", "Do, Xuan Long", "Qin, Chengwei", "Ding, Bosheng", "Guo, Xiaobao", "Li, Minzhi", "Li, Xingxuan", "Joty, Shafiq" ]
Retrieving Multimodal Information for Augmented Generation: A Survey
findings-emnlp.314
2303.10868
[ "" ]
https://huggingface.co/papers/2303.10868
1
0
0
11
[]
[]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.315.bib
https://aclanthology.org/2023.findings-emnlp.315/
@inproceedings{hou-li-2023-improving, title = "Improving Contrastive Learning of Sentence Embeddings with Focal {I}nfo{NCE}", author = "Hou, Pengyue and Li, Xingyu", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.315", doi = "10.18653/v1/2023.findings-emnlp.315", pages = "4757--4762", abstract = "The recent success of SimCSE has greatly advanced state-of-the-art sentence representations. However, the original formulation of SimCSE does not fully exploit the potential of hard negative samples in contrastive learning. This study introduces an unsupervised contrastive learning framework that combines SimCSE with hard negative mining, aiming to enhance the quality of sentence embeddings. The proposed focal-InfoNCE function introduces self-paced modulation terms in the contrastive objective, downweighting the loss associated with easy negatives and encouraging the model focusing on hard negatives. Experimentation on various STS benchmarks shows that our method improves sentence embeddings in terms of Spearman{'}s correlation and representation alignment and uniformity.", }
The recent success of SimCSE has greatly advanced state-of-the-art sentence representations. However, the original formulation of SimCSE does not fully exploit the potential of hard negative samples in contrastive learning. This study introduces an unsupervised contrastive learning framework that combines SimCSE with hard negative mining, aiming to enhance the quality of sentence embeddings. The proposed focal-InfoNCE function introduces self-paced modulation terms in the contrastive objective, down-weighting the loss associated with easy negatives and encouraging the model to focus on hard negatives. Experiments on various STS benchmarks show that our method improves sentence embeddings in terms of Spearman's correlation as well as representation alignment and uniformity.
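One plausible reading of the self-paced modulation is a focal-loss-style reweighting of the InfoNCE term: the per-example loss is scaled by (1 - p)^gamma, where p is the softmax probability of the positive pair, so confidently separated (easy) examples contribute less. The sketch below is an interpretation under that assumption, not the paper's exact objective.

```python
# Hedged sketch of a focal-style InfoNCE; the (1 - p)^gamma reweighting is one
# plausible reading of "self-paced modulation", not the exact published loss.
import torch
import torch.nn.functional as F

def focal_infonce(z1: torch.Tensor, z2: torch.Tensor,
                  tau: float = 0.05, gamma: float = 2.0) -> torch.Tensor:
    # z1, z2: (N, d) embeddings of N positive pairs (e.g., SimCSE dropout views)
    sim = F.cosine_similarity(z1.unsqueeze(1), z2.unsqueeze(0), dim=-1) / tau
    p = F.softmax(sim, dim=1).diagonal()        # probability of the positive
    return ((1 - p).pow(gamma) * (-p.log())).mean()

z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(focal_infonce(z1, z2))
```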
[ "Hou, Pengyue", "Li, Xingyu" ]
Improving Contrastive Learning of Sentence Embeddings with Focal InfoNCE
findings-emnlp.315
[ "https://github.com/puerrrr/focal-infonce" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.316.bib
https://aclanthology.org/2023.findings-emnlp.316/
@inproceedings{nguyen-etal-2023-vault, title = "The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation", author = "Nguyen, Dung and Nam, Le and Dau, Anh and Nguyen, Anh and Nghiem, Khanh and Guo, Jin and Bui, Nghi", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.316", doi = "10.18653/v1/2023.findings-emnlp.316", pages = "4763--4788", abstract = "We present The Vault, an open-source dataset of high quality code-text pairs in multiple programming languages for training large language models to understand and generate code. We propose methods for thoroughly extracting samples that use both rules and deep learning to ensure that they contain high-quality pairs of code and text, resulting in a dataset of 43 million high-quality code-text pairs. We thoroughly evaluated this dataset and discovered that when used to train common code language models (such as CodeT5, CodeBERT, and CodeGen), it outperforms the same models train on other datasets such as CodeSearchNet. These evaluations included common coding tasks such as code generation, code summarization, and code search. The Vault can be used by researchers and practitioners to train a wide range of big language models that understand code. Alternatively, researchers can use our data cleaning methods and scripts to improve their own datasets. We anticipate that using The Vault to train large language models will improve their ability to understand and generate code, propelling AI research and software development forward. We are releasing our source code and a framework to make it easier for others to replicate our results.", }
We present The Vault, an open-source dataset of high-quality code-text pairs in multiple programming languages for training large language models to understand and generate code. We propose extraction methods that use both rules and deep learning to ensure that samples contain high-quality pairs of code and text, resulting in a dataset of 43 million high-quality code-text pairs. We thoroughly evaluated this dataset and discovered that when used to train common code language models (such as CodeT5, CodeBERT, and CodeGen), it outperforms the same models trained on other datasets such as CodeSearchNet. These evaluations cover common coding tasks such as code generation, code summarization, and code search. The Vault can be used by researchers and practitioners to train a wide range of large language models that understand code. Alternatively, researchers can use our data-cleaning methods and scripts to improve their own datasets. We anticipate that using The Vault to train large language models will improve their ability to understand and generate code, propelling AI research and software development forward. We are releasing our source code and a framework to make it easier for others to replicate our results.
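Since this record lists Fsoft-AIC/the-vault-function on the Hub, a hedged loading sketch follows; the split name, column names, and loader arguments are assumptions, and the dataset card documents additional options (e.g., language filters) not shown here.

```python
# Hedged sketch: stream a few code-text pairs from The Vault. Split name and
# column names ("code", "docstring") are assumptions; check the dataset card.
from datasets import load_dataset

ds = load_dataset("Fsoft-AIC/the-vault-function", split="train",
                  streaming=True, trust_remote_code=True)
for i, ex in enumerate(ds):
    print(ex.get("docstring", "")[:80], "->", ex.get("code", "")[:80])
    if i == 2:
        break
```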
[ "Nguyen, Dung", "Nam, Le", "Dau, Anh", "Nguyen, Anh", "Nghiem, Khanh", "Guo, Jin", "Bui, Nghi" ]
The Vault: A Comprehensive Multilingual Dataset for Advancing Code Understanding and Generation
findings-emnlp.316
2305.06156
[ "https://github.com/fsoft-ai4code/thevault" ]
https://huggingface.co/papers/2305.06156
1
1
0
7
[ "Fsoft-AIC/Codebert-docstring-inconsistency" ]
[ "Fsoft-AIC/the-vault-function", "Fsoft-AIC/the-vault-inline", "Fsoft-AIC/the-vault-class" ]
[ "namnh113/Code_Summarization", "nam194/Code_Summarization" ]
1
Poster
https://aclanthology.org/2023.findings-emnlp.317.bib
https://aclanthology.org/2023.findings-emnlp.317/
@inproceedings{lelkes-etal-2023-sdoh, title = "{SDOH}-{NLI}: a Dataset for Inferring Social Determinants of Health from Clinical Notes", author = "Lelkes, Adam and Loreaux, Eric and Schuster, Tal and Chen, Ming-Jun and Rajkomar, Alvin", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.317", doi = "10.18653/v1/2023.findings-emnlp.317", pages = "4789--4798", abstract = "Social and behavioral determinants of health (SDOH) play a significant role in shaping health outcomes, and extracting these determinants from clinical notes is a first step to help healthcare providers systematically identify opportunities to provide appropriate care and address disparities. Progress on using NLP methods for this task has been hindered by the lack of high-quality publicly available labeled data, largely due to the privacy and regulatory constraints on the use of real patients{'} information. This paper introduces a new dataset, SDOH-NLI, that is based on publicly available notes and which we release publicly. We formulate SDOH extraction as a natural language inference task, and provide binary textual entailment labels obtained from human raters for a cross product of a set of social history snippets as premises and SDOH factors as hypotheses. Our dataset differs from standard NLI benchmarks in that our premises and hypotheses are obtained independently. We evaluate both {``}off-the-shelf{''} entailment models as well as models fine-tuned on our data, and highlight the ways in which our dataset appears more challenging than commonly used NLI datasets.", }
Social and behavioral determinants of health (SDOH) play a significant role in shaping health outcomes, and extracting these determinants from clinical notes is a first step to help healthcare providers systematically identify opportunities to provide appropriate care and address disparities. Progress on using NLP methods for this task has been hindered by the lack of high-quality publicly available labeled data, largely due to the privacy and regulatory constraints on the use of real patients' information. This paper introduces a new dataset, SDOH-NLI, that is based on publicly available notes and which we release publicly. We formulate SDOH extraction as a natural language inference task, and provide binary textual entailment labels obtained from human raters for a cross product of a set of social history snippets as premises and SDOH factors as hypotheses. Our dataset differs from standard NLI benchmarks in that our premises and hypotheses are obtained independently. We evaluate both "off-the-shelf" entailment models as well as models fine-tuned on our data, and highlight the ways in which our dataset appears more challenging than commonly used NLI datasets.
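This record lists a Hub mirror, tasksource/SDOH-NLI; loading and inspecting one premise-hypothesis example can be sketched as below. Column names and the label convention are assumptions to verify against the dataset card.

```python
# Hedged sketch: load the SDOH-NLI mirror and inspect one example. Column
# names and label convention are assumptions; verify on the dataset card.
from datasets import load_dataset

ds = load_dataset("tasksource/SDOH-NLI")
ex = ds["train"][0]
print(ex)  # expected: a social-history premise, an SDOH hypothesis, a 0/1 label
```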
[ "Lelkes, Adam", "Loreaux, Eric", "Schuster, Tal", "Chen, Ming-Jun", "Rajkomar, Alvin" ]
SDOH-NLI: a Dataset for Inferring Social Determinants of Health from Clinical Notes
findings-emnlp.317
2310.18431
[ "" ]
https://huggingface.co/papers/2310.18431
0
0
0
5
[]
[ "tasksource/SDOH-NLI", "davanstrien/SDOH-NLI" ]
[]
1
Poster
https://aclanthology.org/2023.findings-emnlp.318.bib
https://aclanthology.org/2023.findings-emnlp.318/
@inproceedings{pu-etal-2023-zero, title = "On the Zero-Shot Generalization of Machine-Generated Text Detectors", author = "Pu, Xiao and Zhang, Jingyu and Han, Xiaochuang and Tsvetkov, Yulia and He, Tianxing", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.318", doi = "10.18653/v1/2023.findings-emnlp.318", pages = "4799--4808", abstract = "The rampant proliferation of large language models, fluent enough to generate text indistinguishable from human-written language, gives unprecedented importance to the detection of machine-generated text. This work is motivated by an important research question: How will the detectors of machine-generated text perform on outputs of a new generator, that the detectors were not trained on? We begin by collecting generation data from a wide range of LLMs, and train neural detectors on data from each generator and test its performance on held-out generators. While none of the detectors can generalize to all generators, we observe a consistent and interesting pattern that the detectors trained on data from a medium-size LLM can zero-shot generalize to the larger version. As a concrete application, we demonstrate that robust detectors can be built on an ensemble of training data from medium-sized models.", }
The rampant proliferation of large language models, fluent enough to generate text indistinguishable from human-written language, gives unprecedented importance to the detection of machine-generated text. This work is motivated by an important research question: how will detectors of machine-generated text perform on the outputs of a new generator that they were not trained on? We begin by collecting generation data from a wide range of LLMs, then train neural detectors on data from each generator and test their performance on held-out generators. While none of the detectors generalize to all generators, we observe a consistent and interesting pattern: detectors trained on data from a medium-sized LLM can zero-shot generalize to the larger version. As a concrete application, we demonstrate that robust detectors can be built on an ensemble of training data from medium-sized models.
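The experimental protocol (train a detector on one generator's outputs, test on a held-out generator) can be sketched with a lightweight classifier over toy data; everything below, including the texts and labels, is a hypothetical placeholder, not the paper's neural detector.

```python
# Hedged sketch of cross-generator evaluation of a machine-text detector;
# all texts and labels are toy placeholders.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.pipeline import make_pipeline

train_texts = ["human essay one", "model continuation alpha"]   # generator A + human
train_labels = [0, 1]                                           # 1 = machine
heldout_texts = ["human essay two", "model continuation beta"]  # generator B + human
heldout_labels = [0, 1]

detector = make_pipeline(TfidfVectorizer(), LogisticRegression(max_iter=1000))
detector.fit(train_texts, train_labels)
print("held-out generator accuracy:",
      accuracy_score(heldout_labels, detector.predict(heldout_texts)))
```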
[ "Pu, Xiao", "Zhang, Jingyu", "Han, Xiaochuang", "Tsvetkov, Yulia", "He, Tianxing" ]
On the Zero-Shot Generalization of Machine-Generated Text Detectors
findings-emnlp.318
2310.05165
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster
https://aclanthology.org/2023.findings-emnlp.319.bib
https://aclanthology.org/2023.findings-emnlp.319/
@inproceedings{hao-etal-2023-complex, title = "Complex Event Schema Induction with Knowledge-Enriched Diffusion Model", author = "Hao, Yupu and Cao, Pengfei and Chen, Yubo and Liu, Kang and Xu, Jiexin and Li, Huaijun and Jiang, Xiaojian and Zhao, Jun", editor = "Bouamor, Houda and Pino, Juan and Bali, Kalika", booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023", month = dec, year = "2023", address = "Singapore", publisher = "Association for Computational Linguistics", url = "https://aclanthology.org/2023.findings-emnlp.319", doi = "10.18653/v1/2023.findings-emnlp.319", pages = "4809--4825", abstract = "The concept of a complex event schema pertains to the graph structure that represents real-world knowledge of events and their multi-dimensional relationships. However, previous studies on event schema induction have been hindered by challenges such as error propagation and data quality issues. To tackle these challenges, we propose a knowledge-enriched discrete diffusion model. Specifically, we distill the abundant event scenario knowledge of Large Language Models (LLMs) through an object-oriented Python style prompt. We incorporate this knowledge into the training data, enhancing its quality. Subsequently, we employ a discrete diffusion process to generate all nodes and links simultaneously in a non-auto-regressive manner to tackle the problem of error propagation. Additionally, we devise an entity relationship prediction module to complete entity relationships between event arguments. Experimental results demonstrate that our approach achieves outstanding performance across a range of evaluation metrics.", }
The concept of a complex event schema pertains to the graph structure that represents real-world knowledge of events and their multi-dimensional relationships. However, previous studies on event schema induction have been hindered by challenges such as error propagation and data quality issues. To tackle these challenges, we propose a knowledge-enriched discrete diffusion model. Specifically, we distill the abundant event scenario knowledge of Large Language Models (LLMs) through an object-oriented, Python-style prompt. We incorporate this knowledge into the training data, enhancing its quality. Subsequently, we employ a discrete diffusion process to generate all nodes and links simultaneously in a non-auto-regressive manner to tackle the problem of error propagation. Additionally, we devise an entity relationship prediction module to complete entity relationships between event arguments. Experimental results demonstrate that our approach achieves outstanding performance across a range of evaluation metrics.
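The "object-oriented, Python-style prompt" can be illustrated with a hedged sketch: event scenario knowledge is elicited by asking an LLM to complete class definitions whose fields encode participants and temporal links. The class layout, field names, and scenario are hypothetical, not the paper's exact prompt.

```python
# Hedged sketch of an object-oriented, Python-style prompt for distilling
# event scenario knowledge from an LLM; the schema fields are hypothetical.
PROMPT = '''\
class Event:
    """A scenario event with participants and temporal links."""
    name: str
    participants: list[str]
    before: list[str]   # names of events this one must precede

# Scenario: "cyberattack". Continue the list of Event subclasses:
class GainAccess(Event):
    name = "gain access"
    participants = ["attacker", "target system"]
    before = ["exfiltrate data"]

class'''

print(PROMPT)  # an LLM's completion is parsed into schema nodes and links
```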
[ "Hao, Yupu", "Cao, Pengfei", "Chen, Yubo", "Liu, Kang", "Xu, Jiexin", "Li, Huaijun", "Jiang, Xiaojian", "Zhao, Jun" ]
Complex Event Schema Induction with Knowledge-Enriched Diffusion Model
findings-emnlp.319
[ "" ]
-1
-1
-1
-1
[]
[]
[]
0
Poster