bibtex_url (string, 41 to 53 chars) | proceedings (string, 38 to 50 chars) | bibtext (string, 528 to 3.02k chars) | abstract (string, 17 to 2.35k chars) | authors (sequence, 1 to 44 items) | title (string, 18 to 190 chars) | id (string, 7 to 19 chars) | arxiv_id (string, 0 to 10 chars) | GitHub (sequence, 1 item) | paper_page (string, 528 distinct values) | n_linked_authors (int64, -1 to 15) | upvotes (int64, -1 to 77) | num_comments (int64, -1 to 10) | n_authors (int64, -1 to 52) | Models (sequence, 0 to 100 items) | Datasets (sequence, 0 to 15 items) | Spaces (sequence, 0 to 46 items) | paper_page_exists_pre_conf (int64, 0 to 1) | type (string, 2 distinct values)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
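Each row that follows is one EMNLP 2023 main-conference paper. The Hub-derived integer columns (n_linked_authors, upvotes, num_comments, n_authors) appear to use -1 as a sentinel for papers without a Hugging Face paper page, and type takes one of two values (e.g. Poster). As a minimal sketch of how these records might be consumed programmatically, assuming the table is published as a Hugging Face dataset (the repository id in the snippet is a hypothetical placeholder, not a real repo):

```python
# Minimal sketch, not the canonical loader: assumes the table is hosted as a
# Hugging Face dataset and that the `datasets` library is installed. The
# repository id "someuser/emnlp-2023-hub-index" is a hypothetical placeholder.
from datasets import load_dataset

ds = load_dataset("someuser/emnlp-2023-hub-index", split="train")

# Keep papers that already had a Hub paper page before the conference;
# -1 in the Hub-derived integer columns marks missing Hub metadata.
claimed = ds.filter(lambda row: row["paper_page_exists_pre_conf"] == 1)

for row in claimed:
    # GitHub, Models, Datasets, and Spaces are (possibly empty) lists.
    print(row["id"], row["title"], row["upvotes"], row["GitHub"])
```

The bibtext column stores the full BibTeX record verbatim (including LaTeX escapes such as {'}), so a standard BibTeX parser such as bibtexparser can recover structured citation fields from it if needed.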
https://aclanthology.org/2023.emnlp-main.1001.bib | https://aclanthology.org/2023.emnlp-main.1001/ | @inproceedings{zhao-etal-2023-hop,
title = "Hop, Union, Generate: Explainable Multi-hop Reasoning without Rationale Supervision",
author = "Zhao, Wenting and
Chiu, Justin and
Cardie, Claire and
Rush, Alexander",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1001",
doi = "10.18653/v1/2023.emnlp-main.1001",
pages = "16119--16130",
abstract = "Explainable multi-hop question answering (QA) not only predicts answers but also identifies rationales, i. e. subsets of input sentences used to derive the answers. Existing methods rely on supervision for both answers and rationales. This problem has been extensively studied under the supervised setting, where both answer and rationale annotations are given. Because rationale annotations are expensive to collect and not always available, recent efforts have been devoted to developing methods that do not rely on supervision for rationales. However, such methods have limited capacities in modeling interactions between sentences, let alone reasoning across multiple documents. This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document. Experimental results show that our approach is more accurate at selecting rationales than the previous methods, while maintaining similar accuracy in predicting answers.",
}
| Explainable multi-hop question answering (QA) not only predicts answers but also identifies rationales, i.e., subsets of input sentences used to derive the answers. Existing methods rely on supervision for both answers and rationales. This problem has been extensively studied under the supervised setting, where both answer and rationale annotations are given. Because rationale annotations are expensive to collect and not always available, recent efforts have been devoted to developing methods that do not rely on supervision for rationales. However, such methods have limited capacities in modeling interactions between sentences, let alone reasoning across multiple documents. This work proposes a principled, probabilistic approach for training explainable multi-hop QA systems without rationale supervision. Our approach performs multi-hop reasoning by explicitly modeling rationales as sets, enabling the model to capture interactions between documents and sentences within a document. Experimental results show that our approach is more accurate at selecting rationales than the previous methods, while maintaining similar accuracy in predicting answers. | [
"Zhao, Wenting",
"Chiu, Justin",
"Cardie, Claire",
"Rush, Alex",
"er"
] | Hop, Union, Generate: Explainable Multi-hop Reasoning without Rationale Supervision | emnlp-main.1001 | 2305.14237 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1002.bib | https://aclanthology.org/2023.emnlp-main.1002/ | @inproceedings{jenkins-etal-2023-split,
title = "To Split or Not to Split: Composing Compounds in Contextual Vector Spaces",
author = "Jenkins, Chris and
Miletic, Filip and
Schulte im Walde, Sabine",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1002",
doi = "10.18653/v1/2023.emnlp-main.1002",
pages = "16131--16136",
abstract = "We investigate the effect of sub-word tokenization on representations of German noun compounds: single orthographic words which are composed of two or more constituents but often tokenized into units that are not morphologically motivated or meaningful. Using variants of BERT models and tokenization strategies on domain-specific restricted diachronic data, we introduce a suite of evaluations relying on the masked language modelling task and compositionality prediction. We obtain the most consistent improvements by pre-splitting compounds into constituents.",
}
| We investigate the effect of sub-word tokenization on representations of German noun compounds: single orthographic words which are composed of two or more constituents but often tokenized into units that are not morphologically motivated or meaningful. Using variants of BERT models and tokenization strategies on domain-specific restricted diachronic data, we introduce a suite of evaluations relying on the masked language modelling task and compositionality prediction. We obtain the most consistent improvements by pre-splitting compounds into constituents. | [
"Jenkins, Chris",
"Miletic, Filip",
"Schulte im Walde, Sabine"
] | To Split or Not to Split: Composing Compounds in Contextual Vector Spaces | emnlp-main.1002 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1003.bib | https://aclanthology.org/2023.emnlp-main.1003/ | @inproceedings{gemmell-dalton-2023-toolwriter,
title = "{T}ool{W}riter: Question Specific Tool Synthesis for Tabular Data",
author = "Gemmell, Carlos and
Dalton, Jeff",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1003",
doi = "10.18653/v1/2023.emnlp-main.1003",
pages = "16137--16148",
abstract = "Tabular question answering (TQA) presents a challenging setting for neural systems by requiring joint reasoning of natural language with large amounts of semi-structured data. Unlike humans who use programmatic tools like filters to transform data before processing, language models in TQA process tables directly, resulting in information loss as table size increases. In this paper we propose ToolWriter to generate query specific programs and detect when to apply them to transform tables and align them with the TQA model{'}s capabilities. Focusing Toolwriter to generate row-filtering tools improves the state-of-the-art for WikiTableQuestions and WikiSQL with the most performance gained on long tables. By investigating headroom, our work highlights the broader potential for programmatic tools combined with neural components to manipulate large amounts of structured data.",
}
| Tabular question answering (TQA) presents a challenging setting for neural systems by requiring joint reasoning of natural language with large amounts of semi-structured data. Unlike humans who use programmatic tools like filters to transform data before processing, language models in TQA process tables directly, resulting in information loss as table size increases. In this paper we propose ToolWriter to generate query specific programs and detect when to apply them to transform tables and align them with the TQA model's capabilities. Focusing Toolwriter to generate row-filtering tools improves the state-of-the-art for WikiTableQuestions and WikiSQL with the most performance gained on long tables. By investigating headroom, our work highlights the broader potential for programmatic tools combined with neural components to manipulate large amounts of structured data. | [
"Gemmell, Carlos",
"Dalton, Jeff"
] | ToolWriter: Question Specific Tool Synthesis for Tabular Data | emnlp-main.1003 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1004.bib | https://aclanthology.org/2023.emnlp-main.1004/ | @inproceedings{tian-etal-2023-interactive,
title = "Interactive Text-to-{SQL} Generation via Editable Step-by-Step Explanations",
author = "Tian, Yuan and
Zhang, Zheng and
Ning, Zheng and
Li, Toby and
Kummerfeld, Jonathan K. and
Zhang, Tianyi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1004",
doi = "10.18653/v1/2023.emnlp-main.1004",
pages = "16149--16166",
abstract = "Relational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS.",
}
| Relational databases play an important role in business, science, and more. However, many users cannot fully unleash the analytical power of relational databases, because they are not familiar with database languages such as SQL. Many techniques have been proposed to automatically generate SQL from natural language, but they suffer from two issues: (1) they still make many mistakes, particularly for complex queries, and (2) they do not provide a flexible way for non-expert users to validate and refine incorrect queries. To address these issues, we introduce a new interaction mechanism that allows users to directly edit a step-by-step explanation of a query to fix errors. Our experiments on multiple datasets, as well as a user study with 24 participants, demonstrate that our approach can achieve better performance than multiple SOTA approaches. Our code and datasets are available at https://github.com/magic-YuanTian/STEPS. | [
"Tian, Yuan",
"Zhang, Zheng",
"Ning, Zheng",
"Li, Toby",
"Kummerfeld, Jonathan K.",
"Zhang, Tianyi"
] | Interactive Text-to-SQL Generation via Editable Step-by-Step Explanations | emnlp-main.1004 | 2305.07372 | [
"https://github.com/magic-yuantian/steps"
] | https://huggingface.co/papers/2305.07372 | 1 | 0 | 0 | 6 | [
"DoctorChaos/text-to-SQL-clause-smbop"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.1005.bib | https://aclanthology.org/2023.emnlp-main.1005/ | @inproceedings{liu-etal-2023-coco,
title = "{C}o{C}o: Coherence-Enhanced Machine-Generated Text Detection Under Low Resource With Contrastive Learning",
author = "Liu, Xiaoming and
Zhang, Zhaohan and
Wang, Yichen and
Pu, Hang and
Lan, Yu and
Shen, Chao",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1005",
doi = "10.18653/v1/2023.emnlp-main.1005",
pages = "16167--16188",
abstract = "Machine-Generated Text (MGT) detection, a task that discriminates MGT from Human-Written Text (HWT), plays a crucial role in preventing misuse of text generative models, which excel in mimicking human writing style recently. Latest proposed detectors usually take coarse text sequences as input and fine-tune pretrained models with standard cross-entropy loss. However, these methods fail to consider the linguistic structure of texts. Moreover, they lack the ability to handle the low-resource problem which could often happen in practice considering the enormous amount of textual data online. In this paper, we present a coherence-based contrastive learning model named CoCo to detect the possible MGT under low-resource scenario. To exploit the linguistic feature, we encode coherence information in form of graph into text representation. To tackle the challenges of low data resource, we employ a contrastive learning framework and propose an improved contrastive loss for preventing performance degradation brought by simple samples. The experiment results on two public datasets and two self-constructed datasets prove our approach outperforms the state-of-art methods significantly. Also, we surprisingly find that MGTs originated from up-to-date language models could be easier to detect than these from previous models, in our experiments. And we propose some preliminary explanations for this counter-intuitive phenomena. All the codes and datasets are open-sourced.",
}
| Machine-Generated Text (MGT) detection, a task that discriminates MGT from Human-Written Text (HWT), plays a crucial role in preventing misuse of text generative models, which excel in mimicking human writing style recently. Latest proposed detectors usually take coarse text sequences as input and fine-tune pretrained models with standard cross-entropy loss. However, these methods fail to consider the linguistic structure of texts. Moreover, they lack the ability to handle the low-resource problem which could often happen in practice considering the enormous amount of textual data online. In this paper, we present a coherence-based contrastive learning model named CoCo to detect the possible MGT under low-resource scenario. To exploit the linguistic feature, we encode coherence information in form of graph into text representation. To tackle the challenges of low data resource, we employ a contrastive learning framework and propose an improved contrastive loss for preventing performance degradation brought by simple samples. The experiment results on two public datasets and two self-constructed datasets prove our approach outperforms the state-of-the-art methods significantly. Also, we surprisingly find that MGTs originated from up-to-date language models could be easier to detect than those from previous models, in our experiments. And we propose some preliminary explanations for this counter-intuitive phenomenon. All the codes and datasets are open-sourced. | [
"Liu, Xiaoming",
"Zhang, Zhaohan",
"Wang, Yichen",
"Pu, Hang",
"Lan, Yu",
"Shen, Chao"
] | CoCo: Coherence-Enhanced Machine-Generated Text Detection Under Low Resource With Contrastive Learning | emnlp-main.1005 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1006.bib | https://aclanthology.org/2023.emnlp-main.1006/ | @inproceedings{zhao-etal-2023-anytod,
title = "{A}ny{TOD}: A Programmable Task-Oriented Dialog System",
author = "Zhao, Jeffrey and
Cao, Yuan and
Gupta, Raghav and
Lee, Harrison and
Rastogi, Abhinav and
Wang, Mingqiu and
Soltau, Hagen and
Shafran, Izhak and
Wu, Yonghui",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1006",
doi = "10.18653/v1/2023.emnlp-main.1006",
pages = "16189--16204",
abstract = "We propose AnyTOD, an end-to-end, zero-shot task-oriented dialog (TOD) system capable of zero-shot adaptation onto unseen tasks or domains. We view TOD as a program executed by a language model (LM), where program logic and ontology is provided by a designer as a schema. To enable generalization to unseen schemas and programs without prior training, AnyTOD adopts a neuro-symbolic approach. A neural LM keeps track of events that occur during a conversation, and a symbolic program implementing dialog policy is executed to recommend actions AnyTOD should take. This approach drastically reduces data annotation and model training requirements, addressing the enduring challenge of rapidly adapting a TOD system to unseen tasks and domains. We demonstrate state-of-the-art results on STAR, ABCD and SGD benchmarks. We also demonstrate strong zero-shot transfer ability in low-resource settings, such as zero-shot transfer onto MultiWOZ. In addition, we release STARv2, an updated version of the STAR dataset with richer annotations, for benchmarking zero-shot task transfer for end-to-end TOD models.",
}
| We propose AnyTOD, an end-to-end, zero-shot task-oriented dialog (TOD) system capable of zero-shot adaptation onto unseen tasks or domains. We view TOD as a program executed by a language model (LM), where program logic and ontology is provided by a designer as a schema. To enable generalization to unseen schemas and programs without prior training, AnyTOD adopts a neuro-symbolic approach. A neural LM keeps track of events that occur during a conversation, and a symbolic program implementing dialog policy is executed to recommend actions AnyTOD should take. This approach drastically reduces data annotation and model training requirements, addressing the enduring challenge of rapidly adapting a TOD system to unseen tasks and domains. We demonstrate state-of-the-art results on STAR, ABCD and SGD benchmarks. We also demonstrate strong zero-shot transfer ability in low-resource settings, such as zero-shot transfer onto MultiWOZ. In addition, we release STARv2, an updated version of the STAR dataset with richer annotations, for benchmarking zero-shot task transfer for end-to-end TOD models. | [
"Zhao, Jeffrey",
"Cao, Yuan",
"Gupta, Raghav",
"Lee, Harrison",
"Rastogi, Abhinav",
"Wang, Mingqiu",
"Soltau, Hagen",
"Shafran, Izhak",
"Wu, Yonghui"
] | AnyTOD: A Programmable Task-Oriented Dialog System | emnlp-main.1006 | 2212.09939 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1007.bib | https://aclanthology.org/2023.emnlp-main.1007/ | @inproceedings{cheang-etal-2023-lms,
title = "Can {LM}s Generalize to Future Data? An Empirical Analysis on Text Summarization",
author = "Cheang, Chi and
Chan, Hou and
Wong, Derek and
Liu, Xuebo and
Li, Zhaocong and
Sun, Yanming and
Liu, Shudong and
Chao, Lidia",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1007",
doi = "10.18653/v1/2023.emnlp-main.1007",
pages = "16205--16217",
abstract = "Recent pre-trained language models (PLMs) achieve promising results in existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and finetuning datasets. Hence, the strong performance of PLMs may rely on the parametric knowledge that is memorized during pre-training and fine-tuning. Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects the generalization performance of PLMs on future data. In this work, we propose TempoSum, a novel benchmark that contains data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations to the research community on how to evaluate and improve the temporal generalization capability of text summarization models.",
}
| Recent pre-trained language models (PLMs) achieve promising results in existing abstractive summarization datasets. However, existing summarization benchmarks overlap in time with the standard pre-training corpora and finetuning datasets. Hence, the strong performance of PLMs may rely on the parametric knowledge that is memorized during pre-training and fine-tuning. Moreover, the knowledge memorized by PLMs may quickly become outdated, which affects the generalization performance of PLMs on future data. In this work, we propose TempoSum, a novel benchmark that contains data samples from 2010 to 2022, to understand the temporal generalization ability of abstractive summarization models. Through extensive human evaluation, we show that parametric knowledge stored in summarization models significantly affects the faithfulness of the generated summaries on future data. Moreover, existing faithfulness enhancement methods cannot reliably improve the faithfulness of summarization models on future data. Finally, we discuss several recommendations to the research community on how to evaluate and improve the temporal generalization capability of text summarization models. | [
"Cheang, Chi",
"Chan, Hou",
"Wong, Derek",
"Liu, Xuebo",
"Li, Zhaocong",
"Sun, Yanming",
"Liu, Shudong",
"Chao, Lidia"
] | Can LMs Generalize to Future Data? An Empirical Analysis on Text Summarization | emnlp-main.1007 | 2305.01951 | [
"https://github.com/nlp2ct/temposum"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1008.bib | https://aclanthology.org/2023.emnlp-main.1008/ | @inproceedings{sarkar-etal-2023-zero,
title = "Zero-Shot Multi-Label Topic Inference with Sentence Encoders and {LLM}s",
author = "Sarkar, Souvika and
Feng, Dongji and
Karmaker Santu, Shubhra Kanti",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1008",
doi = "10.18653/v1/2023.emnlp-main.1008",
pages = "16218--16233",
abstract = "In this paper, we conducted a comprehensive study with the latest Sentence Encoders and Large Language Models (LLMs) on the challenging task of {``}definition-wild zero-shot topic inference{''}, where users define or provide the topics of interest in real-time. Through extensive experimentation on seven diverse data sets, we observed that LLMs, such as ChatGPT-3.5 and PaLM, demonstrated superior generality compared to other LLMs, e.g., BLOOM and GPT-NeoX. Furthermore, Sentence-BERT, a BERT-based classical sentence encoder, outperformed PaLM and achieved performance comparable to ChatGPT-3.5.",
}
| In this paper, we conducted a comprehensive study with the latest Sentence Encoders and Large Language Models (LLMs) on the challenging task of "definition-wild zero-shot topic inference", where users define or provide the topics of interest in real-time. Through extensive experimentation on seven diverse data sets, we observed that LLMs, such as ChatGPT-3.5 and PaLM, demonstrated superior generality compared to other LLMs, e.g., BLOOM and GPT-NeoX. Furthermore, Sentence-BERT, a BERT-based classical sentence encoder, outperformed PaLM and achieved performance comparable to ChatGPT-3.5. | [
"Sarkar, Souvika",
"Feng, Dongji",
"Karmaker Santu, Shubhra Kanti"
] | Zero-Shot Multi-Label Topic Inference with Sentence Encoders and LLMs | emnlp-main.1008 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1009.bib | https://aclanthology.org/2023.emnlp-main.1009/ | @inproceedings{bhaumik-etal-2023-taskdiff,
title = "{T}ask{D}iff: A Similarity Metric for Task-Oriented Conversations",
author = "Bhaumik, Ankita and
Venkateswaran, Praveen and
Rizk, Yara and
Isahagian, Vatche",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1009",
doi = "10.18653/v1/2023.emnlp-main.1009",
pages = "16234--16240",
abstract = "The popularity of conversational digital assistants has resulted in the availability of large amounts of conversational data which can be utilized for improved user experience and personalized response generation. Building these assistants using popular large language models like ChatGPT also require additional emphasis on prompt engineering and evaluation methods. Textual similarity metrics are a key ingredient for such analysis and evaluations. While many similarity metrics have been proposed in the literature, they have not proven effective for task-oriented conversations as they do not take advantage of unique conversational features. To address this gap, we present TaskDiff, a novel conversational similarity metric that utilizes different dialogue components (utterances, intents, and slots) and their distributions to compute similarity. Extensive experimental evaluation of TaskDiff on a benchmark dataset demonstrates its superior performance and improved robustness over other related approaches.",
}
| The popularity of conversational digital assistants has resulted in the availability of large amounts of conversational data which can be utilized for improved user experience and personalized response generation. Building these assistants using popular large language models like ChatGPT also requires additional emphasis on prompt engineering and evaluation methods. Textual similarity metrics are a key ingredient for such analysis and evaluations. While many similarity metrics have been proposed in the literature, they have not proven effective for task-oriented conversations as they do not take advantage of unique conversational features. To address this gap, we present TaskDiff, a novel conversational similarity metric that utilizes different dialogue components (utterances, intents, and slots) and their distributions to compute similarity. Extensive experimental evaluation of TaskDiff on a benchmark dataset demonstrates its superior performance and improved robustness over other related approaches. | [
"Bhaumik, Ankita",
"Venkateswaran, Praveen",
"Rizk, Yara",
"Isahagian, Vatche"
] | TaskDiff: A Similarity Metric for Task-Oriented Conversations | emnlp-main.1009 | 2310.15298 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1010.bib | https://aclanthology.org/2023.emnlp-main.1010/ | @inproceedings{sung-etal-2023-fake,
title = "Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines",
author = "Sung, Yoo Yeon and
Boyd-Graber, Jordan and
Hassan, Naeemul",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1010",
doi = "10.18653/v1/2023.emnlp-main.1010",
pages = "16241--16258",
abstract = "Polarization and the marketplace for impressions have conspired to make navigating information online difficult for users, and while there has been a significant effort to detect false or misleading text, multimodal datasets have received considerably less attention. To complement existing resources, we present multimodal Video Misleading Headline (VMH), a dataset that consists of videos and whether annotators believe the headline is representative of the video{'}s contents. After collecting and annotating this dataset, we analyze multimodal baselines for detecting misleading headlines. Our annotation process also focuses on why annotators view a video as misleading, allowing us to better understand the interplay of annotators{'} background and the content of the videos.",
}
| Polarization and the marketplace for impressions have conspired to make navigating information online difficult for users, and while there has been a significant effort to detect false or misleading text, multimodal datasets have received considerably less attention. To complement existing resources, we present multimodal Video Misleading Headline (VMH), a dataset that consists of videos and whether annotators believe the headline is representative of the video's contents. After collecting and annotating this dataset, we analyze multimodal baselines for detecting misleading headlines. Our annotation process also focuses on why annotators view a video as misleading, allowing us to better understand the interplay of annotators' background and the content of the videos. | [
"Sung, Yoo Yeon",
"Boyd-Graber, Jordan",
"Hassan, Naeemul"
] | Not all Fake News is Written: A Dataset and Analysis of Misleading Video Headlines | emnlp-main.1010 | 2310.13859 | [
"https://github.com/yysung/vmh"
] | https://huggingface.co/papers/2310.13859 | 0 | 0 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.1011.bib | https://aclanthology.org/2023.emnlp-main.1011/ | @inproceedings{petrak-etal-2023-learning,
title = "Learning From Free-Text Human Feedback {--} Collect New Datasets Or Extend Existing Ones?",
author = "Petrak, Dominic and
Moosavi, Nafise and
Tian, Ye and
Rozanov, Nikolai and
Gurevych, Iryna",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1011",
doi = "10.18653/v1/2023.emnlp-main.1011",
pages = "16259--16279",
abstract = "Continuous learning from free-text human feedback, such as error corrections, new knowledge, or alternative responses, is essential for today{'}s chatbots and virtual assistants to stay up-to-date, engaging, and socially acceptable. However, for research on methods for learning from such data, annotated data is scarce. To address this, we examine the error and user response types of six popular dialogue datasets from various types, including MultiWoZ, PersonaChat, Wizards-of-Wikipedia, and others, to assess their extendibility with the needed annotations. For this corpus study, we manually annotate a subset of each dataset with error and user response types using an improved version of the Integrated Error Taxonomy and a newly proposed user response type taxonomy. We provide the resulting dataset (EURTAD) to the community. Our findings provide new insights into dataset composition, including error types, user response types, and the relations between them.",
}
| Continuous learning from free-text human feedback, such as error corrections, new knowledge, or alternative responses, is essential for today's chatbots and virtual assistants to stay up-to-date, engaging, and socially acceptable. However, for research on methods for learning from such data, annotated data is scarce. To address this, we examine the error and user response types of six popular dialogue datasets from various types, including MultiWoZ, PersonaChat, Wizards-of-Wikipedia, and others, to assess their extendibility with the needed annotations. For this corpus study, we manually annotate a subset of each dataset with error and user response types using an improved version of the Integrated Error Taxonomy and a newly proposed user response type taxonomy. We provide the resulting dataset (EURTAD) to the community. Our findings provide new insights into dataset composition, including error types, user response types, and the relations between them. | [
"Petrak, Dominic",
"Moosavi, Nafise",
"Tian, Ye",
"Rozanov, Nikolai",
"Gurevych, Iryna"
] | Learning From Free-Text Human Feedback – Collect New Datasets Or Extend Existing Ones? | emnlp-main.1011 | [
"https://github.com/ukplab/emnlp2023-learning-from-free-text-human-feedback"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1012.bib | https://aclanthology.org/2023.emnlp-main.1012/ | @inproceedings{wiegand-etal-2023-euphemistic,
title = "Euphemistic Abuse {--} A New Dataset and Classification Experiments for Implicitly Abusive Language",
author = "Wiegand, Michael and
Kampfmeier, Jana and
Eder, Elisabeth and
Ruppenhofer, Josef",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1012",
doi = "10.18653/v1/2023.emnlp-main.1012",
pages = "16280--16297",
abstract = "We address the task of identifying euphemistic abuse (e.g. {``}You inspire me to fall asleep{''}) paraphrasing simple explicitly abusive utterances (e.g. {``}You are boring{''}). For this task, we introduce a novel dataset that has been created via crowdsourcing. Special attention has been paid to the generation of appropriate negative (non-abusive) data. We report on classification experiments showing that classifiers trained on previous datasets are less capable of detecting such abuse. Best automatic results are obtained by a classifier that augments training data from our new dataset with automatically-generated GPT-3 completions. We also present a classifier that combines a few manually extracted features that exemplify the major linguistic phenomena constituting euphemistic abuse.",
}
| We address the task of identifying euphemistic abuse (e.g. "You inspire me to fall asleep") paraphrasing simple explicitly abusive utterances (e.g. "You are boring"). For this task, we introduce a novel dataset that has been created via crowdsourcing. Special attention has been paid to the generation of appropriate negative (non-abusive) data. We report on classification experiments showing that classifiers trained on previous datasets are less capable of detecting such abuse. Best automatic results are obtained by a classifier that augments training data from our new dataset with automatically-generated GPT-3 completions. We also present a classifier that combines a few manually extracted features that exemplify the major linguistic phenomena constituting euphemistic abuse. | [
"Wieg",
", Michael",
"Kampfmeier, Jana",
"Eder, Elisabeth",
"Ruppenhofer, Josef"
] | Euphemistic Abuse – A New Dataset and Classification Experiments for Implicitly Abusive Language | emnlp-main.1012 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1013.bib | https://aclanthology.org/2023.emnlp-main.1013/ | @inproceedings{arakelyan-etal-2023-exploring,
title = "Exploring Distributional Shifts in Large Language Models for Code Analysis",
author = "Arakelyan, Shushan and
Das, Rocktim and
Mao, Yi and
Ren, Xiang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1013",
doi = "10.18653/v1/2023.emnlp-main.1013",
pages = "16298--16314",
abstract = "We systematically study how three large language models with code capabilities - CodeT5, Codex, and ChatGPT - generalize to out-of-domain data. We consider two fundamental applications - code summarization, and code generation. We split data into domains following its natural boundaries - by an organization, by a project, and by a module within the software project. We establish that samples from each new domain present all the models with a significant challenge of distribution shift. We study how established methods adapt models to better generalize to new domains. Our experiments show that while multitask learning alone is a reasonable baseline, combining it with few-shot finetuning on examples retrieved from training data can achieve very strong performance. Moreover, this solution can outperform direct finetuning for very low-data scenarios. Finally, we consider variations of this approach to create a more broadly applicable method to adapt to multiple domains at once. We find that for code generation, a model adapted to multiple domains simultaneously performs on par with those adapted to a single domain.",
}
| We systematically study how three large language models with code capabilities - CodeT5, Codex, and ChatGPT - generalize to out-of-domain data. We consider two fundamental applications - code summarization, and code generation. We split data into domains following its natural boundaries - by an organization, by a project, and by a module within the software project. We establish that samples from each new domain present all the models with a significant challenge of distribution shift. We study how established methods adapt models to better generalize to new domains. Our experiments show that while multitask learning alone is a reasonable baseline, combining it with few-shot finetuning on examples retrieved from training data can achieve very strong performance. Moreover, this solution can outperform direct finetuning for very low-data scenarios. Finally, we consider variations of this approach to create a more broadly applicable method to adapt to multiple domains at once. We find that for code generation, a model adapted to multiple domains simultaneously performs on par with those adapted to a single domain. | [
"Arakelyan, Shushan",
"Das, Rocktim",
"Mao, Yi",
"Ren, Xiang"
] | Exploring Distributional Shifts in Large Language Models for Code Analysis | emnlp-main.1013 | 2303.09128 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1014.bib | https://aclanthology.org/2023.emnlp-main.1014/ | @inproceedings{kim-etal-2023-athena,
title = "{ATHENA}: Mathematical Reasoning with Thought Expansion",
author = "Kim, Jb. and
Kim, Hazel and
Hahn, Joonghyuk and
Han, Yo-Sub",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1014",
doi = "10.18653/v1/2023.emnlp-main.1014",
pages = "16315--16327",
abstract = "Solving math word problems depends on how to articulate the problems, the lens through which models view human linguistic expressions. Real-world settings count on such a method even more due to the diverse practices of the same mathematical operations. Earlier works constrain available thinking processes by limited prediction strategies without considering their significance in acquiring mathematical knowledge. We introduce Attention-based THought Expansion Network Architecture (ATHENA) to tackle the challenges of real-world practices by mimicking human thought expansion mechanisms in the form of neural network propagation. A thought expansion recurrently generates the candidates carrying the thoughts of possible math expressions driven from the previous step and yields reasonable thoughts by selecting the valid pathways to the goal. Our experiments show that ATHENA achieves a new state-of-the-art stage toward the ideal model that is compelling in variant questions even when the informativeness in training examples is restricted.",
}
| Solving math word problems depends on how to articulate the problems, the lens through which models view human linguistic expressions. Real-world settings count on such a method even more due to the diverse practices of the same mathematical operations. Earlier works constrain available thinking processes by limited prediction strategies without considering their significance in acquiring mathematical knowledge. We introduce Attention-based THought Expansion Network Architecture (ATHENA) to tackle the challenges of real-world practices by mimicking human thought expansion mechanisms in the form of neural network propagation. A thought expansion recurrently generates the candidates carrying the thoughts of possible math expressions driven from the previous step and yields reasonable thoughts by selecting the valid pathways to the goal. Our experiments show that ATHENA achieves a new state-of-the-art stage toward the ideal model that is compelling in variant questions even when the informativeness in training examples is restricted. | [
"Kim, Jb.",
"Kim, Hazel",
"Hahn, Joonghyuk",
"Han, Yo-Sub"
] | ATHENA: Mathematical Reasoning with Thought Expansion | emnlp-main.1014 | 2311.01036 | [
"https://github.com/the-jb/athena-math"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1015.bib | https://aclanthology.org/2023.emnlp-main.1015/ | @inproceedings{comsa-narayanan-2023-benchmark,
title = "A Benchmark for Reasoning with Spatial Prepositions",
author = "Comsa, Iulia and
Narayanan, Srini",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1015",
doi = "10.18653/v1/2023.emnlp-main.1015",
pages = "16328--16335",
abstract = "Spatial reasoning is a fundamental building block of human cognition, used in representing, grounding, and reasoning about physical and abstract concepts. We propose a novel benchmark focused on assessing inferential properties of statements with spatial prepositions. The benchmark includes original datasets in English and Romanian and aims to probe the limits of reasoning about spatial relations in large language models. We use prompt engineering to study the performance of two families of large language models, PaLM and GPT-3, on our benchmark. Our results show considerable variability in the performance of smaller and larger models, as well as across prompts and languages. However, none of the models reaches human performance.",
}
| Spatial reasoning is a fundamental building block of human cognition, used in representing, grounding, and reasoning about physical and abstract concepts. We propose a novel benchmark focused on assessing inferential properties of statements with spatial prepositions. The benchmark includes original datasets in English and Romanian and aims to probe the limits of reasoning about spatial relations in large language models. We use prompt engineering to study the performance of two families of large language models, PaLM and GPT-3, on our benchmark. Our results show considerable variability in the performance of smaller and larger models, as well as across prompts and languages. However, none of the models reaches human performance. | [
"Comsa, Iulia",
"Narayanan, Srini"
] | A Benchmark for Reasoning with Spatial Prepositions | emnlp-main.1015 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1016.bib | https://aclanthology.org/2023.emnlp-main.1016/ | @inproceedings{alsayyahi-batista-navarro-2023-timeline,
title = "{TIMELINE}: Exhaustive Annotation of Temporal Relations Supporting the Automatic Ordering of Events in News Articles",
author = "Alsayyahi, Sarah and
Batista-Navarro, Riza",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1016",
doi = "10.18653/v1/2023.emnlp-main.1016",
pages = "16336--16348",
abstract = "Temporal relation extraction models have thus far been hindered by a number of issues in existing temporal relation-annotated news datasets, including: (1) low inter-annotator agreement due to the lack of specificity of their annotation guidelines in terms of what counts as a temporal relation; (2) the exclusion of long-distance relations within a given document (those spanning across different paragraphs); and (3) the exclusion of events that are not centred on verbs. This paper aims to alleviate these issues by presenting a new annotation scheme that clearly defines the criteria based on which temporal relations should be annotated. Additionally, the scheme includes events even if they are not expressed as verbs (e.g., nominalised events). Furthermore, we propose a method for annotating all temporal relations{---}including long-distance ones{---}which automates the process, hence reducing time and manual effort on the part of annotators. The result is a new dataset, the TIMELINE corpus, in which improved inter-annotator agreement was obtained, in comparison with previously reported temporal relation datasets. We report the results of training and evaluating two baseline temporal relation extraction models on the new corpus, and compare them with results obtained on the widely used MATRES corpus.",
}
| Temporal relation extraction models have thus far been hindered by a number of issues in existing temporal relation-annotated news datasets, including: (1) low inter-annotator agreement due to the lack of specificity of their annotation guidelines in terms of what counts as a temporal relation; (2) the exclusion of long-distance relations within a given document (those spanning across different paragraphs); and (3) the exclusion of events that are not centred on verbs. This paper aims to alleviate these issues by presenting a new annotation scheme that clearly defines the criteria based on which temporal relations should be annotated. Additionally, the scheme includes events even if they are not expressed as verbs (e.g., nominalised events). Furthermore, we propose a method for annotating all temporal relations (including long-distance ones), which automates the process, hence reducing time and manual effort on the part of annotators. The result is a new dataset, the TIMELINE corpus, in which improved inter-annotator agreement was obtained, in comparison with previously reported temporal relation datasets. We report the results of training and evaluating two baseline temporal relation extraction models on the new corpus, and compare them with results obtained on the widely used MATRES corpus. | [
"Alsayyahi, Sarah",
"Batista-Navarro, Riza"
] | TIMELINE: Exhaustive Annotation of Temporal Relations Supporting the Automatic Ordering of Events in News Articles | emnlp-main.1016 | 2310.17802 | [
"https://github.com/alsayyahi/timeline"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1017.bib | https://aclanthology.org/2023.emnlp-main.1017/ | @inproceedings{song-etal-2023-mitigating,
title = "Mitigating Over-Generation for Unsupervised Keyphrase Extraction with Heterogeneous Centrality Detection",
author = "Song, Mingyang and
Xu, Pengyu and
Feng, Yi and
Liu, Huafeng and
Jing, Liping",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1017",
doi = "10.18653/v1/2023.emnlp-main.1017",
pages = "16349--16359",
abstract = "Over-generation errors occur when a keyphrase extraction model correctly determines a candidate keyphrase as a keyphrase because it contains a word that frequently appears in the document but at the same time erroneously outputs other candidates as keyphrases because they contain the same word. To mitigate this issue, we propose a new heterogeneous centrality detection approach (CentralityRank), which extracts keyphrases by simultaneously identifying both implicit and explicit centrality within a heterogeneous graph as the importance score of each candidate. More specifically, CentralityRank detects centrality by taking full advantage of the content within the input document to construct graphs that encompass semantic nodes of varying granularity levels, not limited to just phrases. These additional nodes act as intermediaries between candidate keyphrases, enhancing cross-phrase relations. Furthermore, we introduce a novel adaptive boundary-aware regularization that can leverage the position information of candidate keyphrases, thus influencing the importance of candidate keyphrases. Extensive experimental results demonstrate the superiority of CentralityRank over recent state-of-the-art unsupervised keyphrase extraction baselines across three benchmark datasets.",
}
| Over-generation errors occur when a keyphrase extraction model correctly determines a candidate keyphrase as a keyphrase because it contains a word that frequently appears in the document but at the same time erroneously outputs other candidates as keyphrases because they contain the same word. To mitigate this issue, we propose a new heterogeneous centrality detection approach (CentralityRank), which extracts keyphrases by simultaneously identifying both implicit and explicit centrality within a heterogeneous graph as the importance score of each candidate. More specifically, CentralityRank detects centrality by taking full advantage of the content within the input document to construct graphs that encompass semantic nodes of varying granularity levels, not limited to just phrases. These additional nodes act as intermediaries between candidate keyphrases, enhancing cross-phrase relations. Furthermore, we introduce a novel adaptive boundary-aware regularization that can leverage the position information of candidate keyphrases, thus influencing the importance of candidate keyphrases. Extensive experimental results demonstrate the superiority of CentralityRank over recent state-of-the-art unsupervised keyphrase extraction baselines across three benchmark datasets. | [
"Song, Mingyang",
"Xu, Pengyu",
"Feng, Yi",
"Liu, Huafeng",
"Jing, Liping"
] | Mitigating Over-Generation for Unsupervised Keyphrase Extraction with Heterogeneous Centrality Detection | emnlp-main.1017 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1018.bib | https://aclanthology.org/2023.emnlp-main.1018/ | @inproceedings{liu-etal-2023-towards-interpretable,
title = "Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation",
author = "Liu, Yixin and
Fabbri, Alexander and
Zhao, Yilun and
Liu, Pengfei and
Joty, Shafiq and
Wu, Chien-Sheng and
Xiong, Caiming and
Radev, Dragomir",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1018",
doi = "10.18653/v1/2023.emnlp-main.1018",
pages = "16360--16368",
abstract = "Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics. In this work, we develop strong-performing automatic metrics for reference-based summarization evaluation, based on a two-stage evaluation pipeline that first extracts basic information units from one text sequence and then checks the extracted units in another sequence. The metrics we developed include two-stage metrics that can provide high interpretability at both the fine-grained unit level and summary level, and one-stage metrics that achieve a balance between efficiency and interpretability. We make the developed tools publicly available at https://github.com/Yale-LILY/AutoACU.",
}
| Interpretability and efficiency are two important considerations for the adoption of neural automatic metrics. In this work, we develop strong-performing automatic metrics for reference-based summarization evaluation, based on a two-stage evaluation pipeline that first extracts basic information units from one text sequence and then checks the extracted units in another sequence. The metrics we developed include two-stage metrics that can provide high interpretability at both the fine-grained unit level and summary level, and one-stage metrics that achieve a balance between efficiency and interpretability. We make the developed tools publicly available at https://github.com/Yale-LILY/AutoACU. | [
"Liu, Yixin",
"Fabbri, Alex",
"er",
"Zhao, Yilun",
"Liu, Pengfei",
"Joty, Shafiq",
"Wu, Chien-Sheng",
"Xiong, Caiming",
"Radev, Dragomir"
] | Towards Interpretable and Efficient Automatic Reference-Based Summarization Evaluation | emnlp-main.1018 | 2303.03608 | [
"https://github.com/yale-lily/autoacu"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1019.bib | https://aclanthology.org/2023.emnlp-main.1019/ | @inproceedings{wang-etal-2023-maud,
title = "{MAUD}: An Expert-Annotated Legal {NLP} Dataset for Merger Agreement Understanding",
author = "Wang, Steven and
Scardigli, Antoine and
Tang, Leonard and
Chen, Wei and
Levkin, Dmitry and
Chen, Anya and
Ball, Spencer and
Woodside, Thomas and
Zhang, Oliver and
Hendrycks, Dan",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1019",
doi = "10.18653/v1/2023.emnlp-main.1019",
pages = "16369--16382",
abstract = "Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association{'}s 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community.",
}
| Reading comprehension of legal text can be a particularly challenging task due to the length and complexity of legal clauses and a shortage of expert-annotated datasets. To address this challenge, we introduce the Merger Agreement Understanding Dataset (MAUD), an expert-annotated reading comprehension dataset based on the American Bar Association's 2021 Public Target Deal Points Study, with over 39,000 examples and over 47,000 total annotations. Our fine-tuned Transformer baselines show promising results, with models performing well above random on most questions. However, on a large subset of questions, there is still room for significant improvement. As the only expert-annotated merger agreement dataset, MAUD is valuable as a benchmark for both the legal profession and the NLP community. | [
"Wang, Steven",
"Scardigli, Antoine",
"Tang, Leonard",
"Chen, Wei",
"Levkin, Dmitry",
"Chen, Anya",
"Ball, Spencer",
"Woodside, Thomas",
"Zhang, Oliver",
"Hendrycks, Dan"
] | MAUD: An Expert-Annotated Legal NLP Dataset for Merger Agreement Understanding | emnlp-main.1019 | 2301.00876 | [
"https://github.com/theatticusproject/maud-extraction"
] | https://huggingface.co/papers/2301.00876 | 0 | 0 | 0 | 10 | [
"TracyWang/MAUD_KWM_AWS_Roberta-base"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.1020.bib | https://aclanthology.org/2023.emnlp-main.1020/ | @inproceedings{oh-etal-2023-pk,
title = "{PK}-{ICR}: Persona-Knowledge Interactive Multi-Context Retrieval for Grounded Dialogue",
author = "Oh, Minsik and
Lee, Joosung and
Li, Jiwei and
Wang, Guoyin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1020",
doi = "10.18653/v1/2023.emnlp-main.1020",
pages = "16383--16395",
abstract = "Identifying relevant persona or knowledge for conversational systems is critical to grounded dialogue response generation. However, each grounding has been mostly researched in isolation with more practical multi-context dialogue tasks introduced in recent works. We define Persona and Knowledge Dual Context Identification as the task to identify persona and knowledge jointly for a given dialogue, which could be of elevated importance in complex multi-context dialogue settings. We develop a novel grounding retrieval method that utilizes all contexts of dialogue simultaneously. Our method requires less computational power via utilizing neural QA retrieval models. We further introduce our novel null-positive rank test which measures ranking performance on semantically dissimilar samples (i.e. hard negatives) in relation to data augmentation.",
}
| Identifying relevant persona or knowledge for conversational systems is critical to grounded dialogue response generation. However, each type of grounding has mostly been researched in isolation, while more practical multi-context dialogue tasks have only been introduced in recent works. We define Persona and Knowledge Dual Context Identification as the task of identifying persona and knowledge jointly for a given dialogue, which could be of elevated importance in complex multi-context dialogue settings. We develop a novel grounding retrieval method that utilizes all contexts of the dialogue simultaneously. Our method requires less computational power by utilizing neural QA retrieval models. We further introduce our novel null-positive rank test, which measures ranking performance on semantically dissimilar samples (i.e., hard negatives) in relation to data augmentation. | [
"Oh, Minsik",
"Lee, Joosung",
"Li, Jiwei",
"Wang, Guoyin"
] | PK-ICR: Persona-Knowledge Interactive Multi-Context Retrieval for Grounded Dialogue | emnlp-main.1020 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1021.bib | https://aclanthology.org/2023.emnlp-main.1021/ | @inproceedings{yu-etal-2023-spoken,
title = "More Than Spoken Words: Nonverbal Message Extraction and Generation",
author = "Yu, Dian and
Wang, Xiaoyang and
Chen, Wanshun and
Du, Nan and
Wang, Longyue and
Mi, Haitao and
Yu, Dong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1021",
doi = "10.18653/v1/2023.emnlp-main.1021",
pages = "16396--16413",
abstract = "Nonverbal messages (NM) such as speakers{'} facial expressions and speed of speech are essential for face-to-face communication, and they can be regarded as implicit knowledge as they are usually not included in existing dialogue understanding or generation tasks. This paper introduces the task of extracting NMs in written text and generating NMs for spoken text. Previous studies merely focus on extracting NMs from relatively small-scale well-structured corpora such as movie scripts wherein NMs are enclosed in parentheses by scriptwriters, which greatly decreases the difficulty of extraction. To enable extracting NMs from unstructured corpora, we annotate the first NM extraction dataset for Chinese based on novels and develop three baselines to extract single-span or multi-span NM of a target utterance from its surrounding context. Furthermore, we use the extractors to extract 749K (context, utterance, NM) triples from Chinese novels and investigate whether we can use them to improve NM generation via semi-supervised learning. Experimental results demonstrate that the automatically extracted triples can serve as high-quality augmentation data of clean triples extracted from scripts to generate more relevant, fluent, valid, and factually consistent NMs than the purely supervised generator, and the resulting generator can in turn help Chinese dialogue understanding tasks such as dialogue machine reading comprehension and emotion classification by simply adding the predicted {``}unspoken{''} NM to each utterance or narrative in inputs.",
}
| Nonverbal messages (NM) such as speakers{'} facial expressions and speed of speech are essential for face-to-face communication, and they can be regarded as implicit knowledge as they are usually not included in existing dialogue understanding or generation tasks. This paper introduces the task of extracting NMs in written text and generating NMs for spoken text. Previous studies merely focus on extracting NMs from relatively small-scale well-structured corpora such as movie scripts wherein NMs are enclosed in parentheses by scriptwriters, which greatly decreases the difficulty of extraction. To enable extracting NMs from unstructured corpora, we annotate the first NM extraction dataset for Chinese based on novels and develop three baselines to extract single-span or multi-span NM of a target utterance from its surrounding context. Furthermore, we use the extractors to extract 749K (context, utterance, NM) triples from Chinese novels and investigate whether we can use them to improve NM generation via semi-supervised learning. Experimental results demonstrate that the automatically extracted triples can serve as high-quality augmentation data of clean triples extracted from scripts to generate more relevant, fluent, valid, and factually consistent NMs than the purely supervised generator, and the resulting generator can in turn help Chinese dialogue understanding tasks such as dialogue machine reading comprehension and emotion classification by simply adding the predicted {``}unspoken{''} NM to each utterance or narrative in inputs. | [
"Yu, Dian",
"Wang, Xiaoyang",
"Chen, Wanshun",
"Du, Nan",
"Wang, Longyue",
"Mi, Haitao",
"Yu, Dong"
] | More Than Spoken Words: Nonverbal Message Extraction and Generation | emnlp-main.1021 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1022.bib | https://aclanthology.org/2023.emnlp-main.1022/ | @inproceedings{petersen-van-der-plas-2023-language,
title = "Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance",
author = "Petersen, Molly and
van der Plas, Lonneke",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1022",
doi = "10.18653/v1/2023.emnlp-main.1022",
pages = "16414--16425",
abstract = "While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether or not analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training models approach human performance.",
}
| While analogies are a common way to evaluate word embeddings in NLP, it is also of interest to investigate whether or not analogical reasoning is a task in itself that can be learned. In this paper, we test several ways to learn basic analogical reasoning, specifically focusing on analogies that are more typical of what is used to evaluate analogical reasoning in humans than those in commonly used NLP benchmarks. Our experiments find that models are able to learn analogical reasoning, even with a small amount of data. We additionally compare our models to a dataset with a human baseline, and find that after training models approach human performance. | [
"Petersen, Molly",
"van der Plas, Lonneke"
] | Can language models learn analogical reasoning? Investigating training objectives and comparisons to human performance | emnlp-main.1022 | 2310.05597 | [
"https://github.com/idiap/analogy_learning"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1023.bib | https://aclanthology.org/2023.emnlp-main.1023/ | @inproceedings{jacob-etal-2023-fame,
title = "{FAME}: Flexible, Scalable Analogy Mappings Engine",
author = "Jacob, Shahar and
Shani, Chen and
Shahaf, Dafna",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1023",
doi = "10.18653/v1/2023.emnlp-main.1023",
pages = "16426--16442",
abstract = "Analogy is one of the core capacities of human cognition; when faced with new situations, we often transfer prior experience from other domains. Most work on computational analogy relies heavily on complex, manually crafted input. In this work, we relax the input requirements, requiring only names of entities to be mapped. We automatically extract commonsense representations and use them to identify a mapping between the entities. Unlike previous works, our framework can handle partial analogies and suggest new entities to be added. Moreover, our method{'}s output is easily interpretable, allowing for users to understand why a specific mapping was chosen. Experiments show that our model correctly maps 81.2{\%} of classical 2x2 analogy problems (guess level=50{\%}). On larger problems, it achieves 77.8{\%} accuracy (mean guess level=13.1{\%}). In another experiment, we show our algorithm outperforms human performance, and the automatic suggestions of new entities resemble those suggested by humans. We hope this work will advance computational analogy by paving the way to more flexible, realistic input requirements, with broader applicability.",
}
| Analogy is one of the core capacities of human cognition; when faced with new situations, we often transfer prior experience from other domains. Most work on computational analogy relies heavily on complex, manually crafted input. In this work, we relax the input requirements, requiring only names of entities to be mapped. We automatically extract commonsense representations and use them to identify a mapping between the entities. Unlike previous works, our framework can handle partial analogies and suggest new entities to be added. Moreover, our method{'}s output is easily interpretable, allowing for users to understand why a specific mapping was chosen. Experiments show that our model correctly maps 81.2{\%} of classical 2x2 analogy problems (guess level=50{\%}). On larger problems, it achieves 77.8{\%} accuracy (mean guess level=13.1{\%}). In another experiment, we show our algorithm outperforms human performance, and the automatic suggestions of new entities resemble those suggested by humans. We hope this work will advance computational analogy by paving the way to more flexible, realistic input requirements, with broader applicability. | [
"Jacob, Shahar",
"Shani, Chen",
"Shahaf, Dafna"
] | FAME: Flexible, Scalable Analogy Mappings Engine | emnlp-main.1023 | 2311.01860 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1024.bib | https://aclanthology.org/2023.emnlp-main.1024/ | @inproceedings{wang-etal-2023-self-training,
title = "A Self-training Framework for Automated Medical Report Generation",
author = "Wang, Siyuan and
Liu, Zheng and
Peng, Bo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1024",
doi = "10.18653/v1/2023.emnlp-main.1024",
pages = "16443--16449",
abstract = "Medical report generation, focusing on automatically generating accurate clinical findings from medical images, is an important medical artificial intelligence task. It reduces the workload of physicians in writing reports. Many of the current methods depend heavily on labeled datasets that include a large amount of image-report pairs, but such datasets labeled by physicians are hard to acquire in clinical practice. To this end, in this paper, we introduce a self-training framework named REMOTE (i.e., Revisiting sElf-training for Medical repOrT gEneration) to exploit the unlabeled medical images and a reference-free evaluation metric MedCLIPScore to augment a small-scale medical report generation dataset for training accurate medical report generation model. Experiments and analysis conducted on the MIMIC-CXR and IU-Xray benchmark datasets demonstrate that, our REMOTE framework, using 1{\%} labeled training data, achieves competitive performance with previous fully-supervised models that are trained on entire training data.",
}
| Medical report generation, focusing on automatically generating accurate clinical findings from medical images, is an important medical artificial intelligence task. It reduces the workload of physicians in writing reports. Many of the current methods depend heavily on labeled datasets that include a large number of image-report pairs, but such datasets labeled by physicians are hard to acquire in clinical practice. To this end, in this paper, we introduce a self-training framework named REMOTE (i.e., Revisiting sElf-training for Medical repOrT gEneration) to exploit unlabeled medical images and a reference-free evaluation metric, MedCLIPScore, to augment a small-scale medical report generation dataset for training an accurate medical report generation model. Experiments and analysis conducted on the MIMIC-CXR and IU-Xray benchmark datasets demonstrate that our REMOTE framework, using 1{\%} of the labeled training data, achieves competitive performance with previous fully-supervised models that are trained on the entire training data. | [
"Wang, Siyuan",
"Liu, Zheng",
"Peng, Bo"
] | A Self-training Framework for Automated Medical Report Generation | emnlp-main.1024 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1025.bib | https://aclanthology.org/2023.emnlp-main.1025/ | @inproceedings{liu-etal-2023-picture,
title = "A Picture is Worth a Thousand Words: Language Models Plan from Pixels",
author = "Liu, Anthony and
Logeswaran, Lajanugen and
Sohn, Sungryull and
Lee, Honglak",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1025",
doi = "10.18653/v1/2023.emnlp-main.1025",
pages = "16450--16459",
abstract = "Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments. In this work, we explore the use of pre-trained language models (PLMs) to reason about plan sequences from text instructions in embodied visual environments. Prior PLM based approaches for planning either assume observations are available in the form of text by a captioning model, reason about plans from the instruction alone, or incorporate information about the visual environment in limited ways (such as a pre-trained affordance function). In contrast, we show that the PLM can accurately plan even when observations are directly encoded as input prompts for the PLM. We show this simple approach outperforms prior approaches in experiments on the ALFWorld and VirtualHome benchmarks.",
}
| Planning is an important capability of artificial agents that perform long-horizon tasks in real-world environments. In this work, we explore the use of pre-trained language models (PLMs) to reason about plan sequences from text instructions in embodied visual environments. Prior PLM based approaches for planning either assume observations are available in the form of text by a captioning model, reason about plans from the instruction alone, or incorporate information about the visual environment in limited ways (such as a pre-trained affordance function). In contrast, we show that the PLM can accurately plan even when observations are directly encoded as input prompts for the PLM. We show this simple approach outperforms prior approaches in experiments on the ALFWorld and VirtualHome benchmarks. | [
"Liu, Anthony",
"Logeswaran, Lajanugen",
"Sohn, Sungryull",
"Lee, Honglak"
] | A Picture is Worth a Thousand Words: Language Models Plan from Pixels | emnlp-main.1025 | 2303.09031 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1026.bib | https://aclanthology.org/2023.emnlp-main.1026/ | @inproceedings{li-etal-2023-interpreting,
title = "Interpreting and Exploiting Functional Specialization in Multi-Head Attention under Multi-task Learning",
author = "Li, Chong and
Wang, Shaonan and
Zhang, Yunhao and
Zhang, Jiajun and
Zong, Chengqing",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1026",
doi = "10.18653/v1/2023.emnlp-main.1026",
pages = "16460--16476",
abstract = "Transformer-based models, even though achieving super-human performance on several downstream tasks, are often regarded as a black box and used as a whole. It is still unclear what mechanisms they have learned, especially their core module: multi-head attention. Inspired by functional specialization in the human brain, which helps to efficiently handle multiple tasks, this work attempts to figure out whether the multi-head attention module will evolve similar function separation under multi-tasking training. If it is, can this mechanism further improve the model performance? To investigate these questions, we introduce an interpreting method to quantify the degree of functional specialization in multi-head attention. We further propose a simple multi-task training method to increase functional specialization and mitigate negative information transfer in multi-task learning. Experimental results on seven pre-trained transformer models have demonstrated that multi-head attention does evolve functional specialization phenomenon after multi-task training which is affected by the similarity of tasks. Moreover, the multi-task training strategy based on functional specialization boosts performance in both multi-task learning and transfer learning without adding any parameters.",
}
| Transformer-based models, even though they achieve super-human performance on several downstream tasks, are often regarded as a black box and used as a whole. It is still unclear what mechanisms they have learned, especially their core module: multi-head attention. Inspired by functional specialization in the human brain, which helps to efficiently handle multiple tasks, this work attempts to figure out whether the multi-head attention module will evolve a similar functional separation under multi-task training. If so, can this mechanism further improve model performance? To investigate these questions, we introduce an interpreting method to quantify the degree of functional specialization in multi-head attention. We further propose a simple multi-task training method to increase functional specialization and mitigate negative information transfer in multi-task learning. Experimental results on seven pre-trained transformer models demonstrate that multi-head attention does evolve a functional specialization phenomenon after multi-task training, and that this is affected by the similarity of the tasks. Moreover, the multi-task training strategy based on functional specialization boosts performance in both multi-task learning and transfer learning without adding any parameters. | [
"Li, Chong",
"Wang, Shaonan",
"Zhang, Yunhao",
"Zhang, Jiajun",
"Zong, Chengqing"
] | Interpreting and Exploiting Functional Specialization in Multi-Head Attention under Multi-task Learning | emnlp-main.1026 | 2310.10318 | [
"https://github.com/znlp/functionalspecializationinmha"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1027.bib | https://aclanthology.org/2023.emnlp-main.1027/ | @inproceedings{pikuliak-etal-2023-multilingual,
title = "Multilingual Previously Fact-Checked Claim Retrieval",
author = "Pikuliak, Mat{\'u}{\v{s}} and
Srba, Ivan and
Moro, Robert and
Hromadka, Timo and
Smole{\v{n}}, Timotej and
Meli{\v{s}}ek, Martin and
Vykopal, Ivan and
Simko, Jakub and
Podrou{\v{z}}ek, Juraj and
Bielikova, Maria",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1027",
doi = "10.18653/v1/2023.emnlp-main.1027",
pages = "16477--16500",
abstract = "Fact-checkers are often hampered by the sheer amount of online content that needs to be fact-checked. NLP can help them by retrieving already existing fact-checks relevant to the content being investigated. This paper introduces a new multilingual dataset for previously fact-checked claim retrieval. We collected 28k posts in 27 languages from social media, 206k fact-checks in 39 languages written by professional fact-checkers, as well as 31k connections between these two groups. This is the most extensive and the most linguistically diverse dataset of this kind to date. We evaluated how different unsupervised methods fare on this dataset and its various dimensions. We show that evaluating such a diverse dataset has its complexities and proper care needs to be taken before interpreting the results. We also evaluated a supervised fine-tuning approach, improving upon the unsupervised method significantly.",
}
| Fact-checkers are often hampered by the sheer amount of online content that needs to be fact-checked. NLP can help them by retrieving already existing fact-checks relevant to the content being investigated. This paper introduces a new multilingual dataset for previously fact-checked claim retrieval. We collected 28k posts in 27 languages from social media, 206k fact-checks in 39 languages written by professional fact-checkers, as well as 31k connections between these two groups. This is the most extensive and the most linguistically diverse dataset of this kind to date. We evaluated how different unsupervised methods fare on this dataset and its various dimensions. We show that evaluating such a diverse dataset has its complexities and proper care needs to be taken before interpreting the results. We also evaluated a supervised fine-tuning approach, improving upon the unsupervised method significantly. | [
"Pikuliak, Mat{\\'u}{\\v{s}}",
"Srba, Ivan",
"Moro, Robert",
"Hromadka, Timo",
"Smole{\\v{n}}, Timotej",
"Meli{\\v{s}}ek, Martin",
"Vykopal, Ivan",
"Simko, Jakub",
"Podrou{\\v{z}}ek, Juraj",
"Bielikova, Maria"
] | Multilingual Previously Fact-Checked Claim Retrieval | emnlp-main.1027 | 2305.07991 | [
"https://github.com/kinit-sk/multiclaim"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1028.bib | https://aclanthology.org/2023.emnlp-main.1028/ | @inproceedings{he-etal-2023-alcap,
title = "{ALCAP}: Alignment-Augmented Music Captioner",
author = "He, Zihao and
Hao, Weituo and
Lu, Wei-Tsung and
Chen, Changyou and
Lerman, Kristina and
Song, Xuchen",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1028",
doi = "10.18653/v1/2023.emnlp-main.1028",
pages = "16501--16512",
abstract = "Music captioning has gained significant attention in the wake of the rising prominence of streaming media platforms. Traditional approaches often prioritize either the audio or lyrics aspect of the music, inadvertently ignoring the intricate interplay between the two. However, a comprehensive understanding of music necessitates the integration of both these elements. In this study, we delve into this overlooked realm by introducing a method to systematically learn multimodal alignment between audio and lyrics through contrastive learning. This not only recognizes and emphasizes the synergy between audio and lyrics but also paves the way for models to achieve deeper cross-modal coherence, thereby producing high-quality captions. We provide both theoretical and empirical results demonstrating the advantage of the proposed method, which achieves new state-of-the-art on two music captioning datasets.",
}
| Music captioning has gained significant attention in the wake of the rising prominence of streaming media platforms. Traditional approaches often prioritize either the audio or lyrics aspect of the music, inadvertently ignoring the intricate interplay between the two. However, a comprehensive understanding of music necessitates the integration of both these elements. In this study, we delve into this overlooked realm by introducing a method to systematically learn multimodal alignment between audio and lyrics through contrastive learning. This not only recognizes and emphasizes the synergy between audio and lyrics but also paves the way for models to achieve deeper cross-modal coherence, thereby producing high-quality captions. We provide both theoretical and empirical results demonstrating the advantage of the proposed method, which achieves new state-of-the-art on two music captioning datasets. | [
"He, Zihao",
"Hao, Weituo",
"Lu, Wei-Tsung",
"Chen, Changyou",
"Lerman, Kristina",
"Song, Xuchen"
] | ALCAP: Alignment-Augmented Music Captioner | emnlp-main.1028 | 2212.10901 | [
"https://github.com/zihaohe123/alcap"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1029.bib | https://aclanthology.org/2023.emnlp-main.1029/ | @inproceedings{zhao-etal-2023-transformers,
title = "Do Transformers Parse while Predicting the Masked Word?",
author = "Zhao, Haoyu and
Panigrahi, Abhishek and
Ge, Rong and
Arora, Sanjeev",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1029",
doi = "10.18653/v1/2023.emnlp-main.1029",
pages = "16513--16542",
abstract = "Pre-trained language models have been shown to encode linguistic structures like parse trees in their embeddings while being trained unsupervised. Some doubts have been raised whether the models are doing parsing or only some computation weakly correlated with it. Concretely: (a) Is it possible to explicitly describe transformers with realistic embedding dimensions, number of heads, etc. that are capable of doing parsing {---} or even approximate parsing? (b) Why do pre-trained models capture parsing structure? This paper takes a step toward answering these questions in the context of generative modeling with PCFGs. We show that masked language models like BERT or RoBERTa of moderate sizes can approximately execute the Inside-Outside algorithm for the English PCFG (Marcus et al., 1993). We also show that the Inside-Outside algorithm is optimal for masked language modeling loss on the PCFG-generated data. We conduct probing experiments on models pre-trained on PCFG-generated data to show that this not only allows recovery of approximate parse tree, but also recovers marginal span probabilities computed by the Inside-Outside algorithm, which suggests an implicit bias of masked language modeling towards this algorithm.",
}
| Pre-trained language models have been shown to encode linguistic structures like parse trees in their embeddings while being trained unsupervised. Some doubts have been raised about whether the models are doing parsing or only some computation weakly correlated with it. Concretely: (a) Is it possible to explicitly describe transformers with realistic embedding dimensions, number of heads, etc. that are capable of doing parsing {---} or even approximate parsing? (b) Why do pre-trained models capture parsing structure? This paper takes a step toward answering these questions in the context of generative modeling with PCFGs. We show that masked language models like BERT or RoBERTa of moderate sizes can approximately execute the Inside-Outside algorithm for the English PCFG (Marcus et al., 1993). We also show that the Inside-Outside algorithm is optimal for masked language modeling loss on the PCFG-generated data. We conduct probing experiments on models pre-trained on PCFG-generated data to show that this not only allows recovery of an approximate parse tree, but also recovers marginal span probabilities computed by the Inside-Outside algorithm, which suggests an implicit bias of masked language modeling towards this algorithm. | [
"Zhao, Haoyu",
"Panigrahi, Abhishek",
"Ge, Rong",
"Arora, Sanjeev"
] | Do Transformers Parse while Predicting the Masked Word? | emnlp-main.1029 | 2303.08117 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1030.bib | https://aclanthology.org/2023.emnlp-main.1030/ | @inproceedings{liu-etal-2023-composable,
title = "Composable Text Controls in Latent Space with {ODE}s",
author = "Liu, Guangyi and
Feng, Zeyu and
Gao, Yuan and
Yang, Zichao and
Liang, Xiaodan and
Bao, Junwei and
He, Xiaodong and
Cui, Shuguang and
Li, Zhen and
Hu, Zhiting",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1030",
doi = "10.18653/v1/2023.emnlp-main.1030",
pages = "16543--16570",
abstract = "Real-world text applications often involve composing a wide range of text control operations, such as editing the text w.r.t. an attribute, manipulating keywords and structure, and generating new text of desired properties. Prior work typically learns/finetunes a language model (LM) to perform individual or specific subsets of operations. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new efficient approach for composable text operations in the compact latent space of text. The low-dimensionality and differentiability of the text latent vector allow us to develop an efficient sampler based on ordinary differential equations (ODEs) given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT2) to the latent space through efficient adaption, we then decode the sampled vectors into desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired using any relevant data from different domains. Experiments show that composing those operators within our approach manages to generate or edit high-quality text, substantially improving over previous methods in terms of generation quality and efficiency.",
}
| Real-world text applications often involve composing a wide range of text control operations, such as editing the text w.r.t. an attribute, manipulating keywords and structure, and generating new text of desired properties. Prior work typically learns/finetunes a language model (LM) to perform individual or specific subsets of operations. Recent research has studied combining operations in a plug-and-play manner, often with costly search or optimization in the complex sequence space. This paper proposes a new efficient approach for composable text operations in the compact latent space of text. The low-dimensionality and differentiability of the text latent vector allow us to develop an efficient sampler based on ordinary differential equations (ODEs) given arbitrary plug-in operators (e.g., attribute classifiers). By connecting pretrained LMs (e.g., GPT2) to the latent space through efficient adaption, we then decode the sampled vectors into desired text sequences. The flexible approach permits diverse control operators (sentiment, tense, formality, keywords, etc.) acquired using any relevant data from different domains. Experiments show that composing those operators within our approach manages to generate or edit high-quality text, substantially improving over previous methods in terms of generation quality and efficiency. | [
"Liu, Guangyi",
"Feng, Zeyu",
"Gao, Yuan",
"Yang, Zichao",
"Liang, Xiaodan",
"Bao, Junwei",
"He, Xiaodong",
"Cui, Shuguang",
"Li, Zhen",
"Hu, Zhiting"
] | Composable Text Controls in Latent Space with ODEs | emnlp-main.1030 | 2208.00638 | [
"https://github.com/guangyliu/latentops"
] | https://huggingface.co/papers/2208.00638 | 1 | 0 | 0 | 10 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.1031.bib | https://aclanthology.org/2023.emnlp-main.1031/ | @inproceedings{lee-etal-2023-p5,
title = "P5: Plug-and-Play Persona Prompting for Personalized Response Selection",
author = "Lee, Joosung and
Oh, Minsik and
Lee, Donghun",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1031",
doi = "10.18653/v1/2023.emnlp-main.1031",
pages = "16571--16582",
abstract = "The use of persona-grounded retrieval-based chatbots is crucial for personalized conversations, but there are several challenges that need to be addressed. 1) In general, collecting persona-grounded corpus is very expensive. 2) The chatbot system does not always respond in consideration of persona at real applications. To address these challenges, we propose a plug-and-play persona prompting method. Our system can function as a standard open-domain chatbot if persona information is not available. We demonstrate that this approach performs well in the zero-shot setting, which reduces the dependence on persona-ground training data. This makes it easier to expand the system to other languages without the need to build a persona-grounded corpus. Additionally, our model can be fine-tuned for even better performance. In our experiments, the zero-shot model improved the standard model by 7.71 and 1.04 points in the original persona and revised persona, respectively. The fine-tuned model improved the previous state-of-the-art system by 1.95 and 3.39 points in the original persona and revised persona, respectively. To the best of our knowledge, this is the first attempt to solve the problem of personalized response selection using prompt sequences. Our code is available on github.",
}
| The use of persona-grounded retrieval-based chatbots is crucial for personalized conversations, but there are several challenges that need to be addressed. 1) In general, collecting a persona-grounded corpus is very expensive. 2) The chatbot system does not always respond in consideration of persona in real applications. To address these challenges, we propose a plug-and-play persona prompting method. Our system can function as a standard open-domain chatbot if persona information is not available. We demonstrate that this approach performs well in the zero-shot setting, which reduces the dependence on persona-grounded training data. This makes it easier to expand the system to other languages without the need to build a persona-grounded corpus. Additionally, our model can be fine-tuned for even better performance. In our experiments, the zero-shot model improved the standard model by 7.71 and 1.04 points on the original persona and revised persona, respectively. The fine-tuned model improved the previous state-of-the-art system by 1.95 and 3.39 points on the original persona and revised persona, respectively. To the best of our knowledge, this is the first attempt to solve the problem of personalized response selection using prompt sequences. Our code is available on GitHub. | [
"Lee, Joosung",
"Oh, Minsik",
"Lee, Donghun"
] | P5: Plug-and-Play Persona Prompting for Personalized Response Selection | emnlp-main.1031 | 2310.06390 | [
"https://github.com/rungjoo/plug-and-play-prompt-persona"
] | https://huggingface.co/papers/2310.06390 | 0 | 0 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.1032.bib | https://aclanthology.org/2023.emnlp-main.1032/ | @inproceedings{dainese-etal-2023-reader,
title = "Reader: Model-based language-instructed reinforcement learning",
author = "Dainese, Nicola and
Marttinen, Pekka and
Ilin, Alexander",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1032",
doi = "10.18653/v1/2023.emnlp-main.1032",
pages = "16583--16599",
abstract = "We explore how we can build accurate world models, which are partially specified by language, and how we can plan with them in the face of novelty and uncertainty. We propose the first model-based reinforcement learning approach to tackle the environment Read To Fight Monsters (Zhong et al., 2019), a grounded policy learning problem. In RTFM an agent has to reason over a set of rules and a goal, both described in a language manual, and the observations, while taking into account the uncertainty arising from the stochasticity of the environment, in order to generalize successfully its policy to test episodes. We demonstrate the superior performance and sample efficiency of our model-based approach to the existing model-free SOTA agents in eight variants of RTFM. Furthermore, we show how the agent{'}s plans can be inspected, which represents progress towards more interpretable agents.",
}
| We explore how we can build accurate world models, which are partially specified by language, and how we can plan with them in the face of novelty and uncertainty. We propose the first model-based reinforcement learning approach to tackle the environment Read To Fight Monsters (Zhong et al., 2019), a grounded policy learning problem. In RTFM an agent has to reason over a set of rules and a goal, both described in a language manual, and the observations, while taking into account the uncertainty arising from the stochasticity of the environment, in order to generalize successfully its policy to test episodes. We demonstrate the superior performance and sample efficiency of our model-based approach to the existing model-free SOTA agents in eight variants of RTFM. Furthermore, we show how the agent{'}s plans can be inspected, which represents progress towards more interpretable agents. | [
"Dainese, Nicola",
"Marttinen, Pekka",
"Ilin, Alex",
"er"
] | Reader: Model-based language-instructed reinforcement learning | emnlp-main.1032 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1033.bib | https://aclanthology.org/2023.emnlp-main.1033/ | @inproceedings{fu-etal-2023-adapting,
title = "Adapting Offline Speech Translation Models for Streaming with Future-Aware Distillation and Inference",
author = "Fu, Biao and
Liao, Minpeng and
Fan, Kai and
Huang, Zhongqiang and
Chen, Boxing and
Chen, Yidong and
Shi, Xiaodong",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1033",
doi = "10.18653/v1/2023.emnlp-main.1033",
pages = "16600--16619",
abstract = "A popular approach to streaming speech translation is to employ a single offline model with a wait-k policy to support different latency requirements, which is simpler than training multiple online models with different latency constraints. However, there is a mismatch problem in using a model trained with complete utterances for streaming inference with partial input. We demonstrate that speech representations extracted at the end of a streaming input are significantly different from those extracted from a complete utterance. To address this issue, we propose a new approach called Future-Aware Streaming Translation (FAST) that adapts an offline ST model for streaming input. FAST includes a Future-Aware Inference (FAI) strategy that incorporates future context through a trainable masked embedding, and a Future-Aware Distillation (FAD) framework that transfers future context from an approximation of full speech to streaming input. Our experiments on the MuST-C EnDe, EnEs, and EnFr benchmarks show that FAST achieves better trade-offs between translation quality and latency than strong baselines. Extensive analyses suggest that our methods effectively alleviate the aforementioned mismatch problem between offline training and online inference.",
}
| A popular approach to streaming speech translation is to employ a single offline model with a wait-k policy to support different latency requirements, which is simpler than training multiple online models with different latency constraints. However, there is a mismatch problem in using a model trained with complete utterances for streaming inference with partial input. We demonstrate that speech representations extracted at the end of a streaming input are significantly different from those extracted from a complete utterance. To address this issue, we propose a new approach called Future-Aware Streaming Translation (FAST) that adapts an offline ST model for streaming input. FAST includes a Future-Aware Inference (FAI) strategy that incorporates future context through a trainable masked embedding, and a Future-Aware Distillation (FAD) framework that transfers future context from an approximation of full speech to streaming input. Our experiments on the MuST-C EnDe, EnEs, and EnFr benchmarks show that FAST achieves better trade-offs between translation quality and latency than strong baselines. Extensive analyses suggest that our methods effectively alleviate the aforementioned mismatch problem between offline training and online inference. | [
"Fu, Biao",
"Liao, Minpeng",
"Fan, Kai",
"Huang, Zhongqiang",
"Chen, Boxing",
"Chen, Yidong",
"Shi, Xiaodong"
] | Adapting Offline Speech Translation Models for Streaming with Future-Aware Distillation and Inference | emnlp-main.1033 | 2303.07914 | [
"https://github.com/biaofuxmu/fast"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1034.bib | https://aclanthology.org/2023.emnlp-main.1034/ | @inproceedings{yue-etal-2023-relation,
title = "Relation-aware Ensemble Learning for Knowledge Graph Embedding",
author = "Yue, Ling and
Zhang, Yongqi and
Yao, Quanming and
Li, Yong and
Wu, Xian and
Zhang, Ziheng and
Lin, Zhenxi and
Zheng, Yefeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1034",
doi = "10.18653/v1/2023.emnlp-main.1034",
pages = "16620--16631",
abstract = "Knowledge graph (KG) embedding is a fundamental task in natural language processing, and various methods have been proposed to explore semantic patterns in distinctive ways. In this paper, we propose to learn an ensemble by leveraging existing methods in a relation-aware manner. However, exploring these semantics using relation-aware ensemble leads to a much larger search space than general ensemble methods. To address this issue, we propose a divide-search-combine algorithm RelEns-DSC that searches the relation-wise ensemble weights independently. This algorithm has the same computation cost as general ensemble methods but with much better performance. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed method in efficiently searching relation-aware ensemble weights and achieving state-of-the-art embedding performance. The code is public at https://github.com/LARS-research/RelEns.",
}
| Knowledge graph (KG) embedding is a fundamental task in natural language processing, and various methods have been proposed to explore semantic patterns in distinctive ways. In this paper, we propose to learn an ensemble by leveraging existing methods in a relation-aware manner. However, exploring these semantics using relation-aware ensemble leads to a much larger search space than general ensemble methods. To address this issue, we propose a divide-search-combine algorithm RelEns-DSC that searches the relation-wise ensemble weights independently. This algorithm has the same computation cost as general ensemble methods but with much better performance. Experimental results on benchmark datasets demonstrate the effectiveness of the proposed method in efficiently searching relation-aware ensemble weights and achieving state-of-the-art embedding performance. The code is public at https://github.com/LARS-research/RelEns. | [
"Yue, Ling",
"Zhang, Yongqi",
"Yao, Quanming",
"Li, Yong",
"Wu, Xian",
"Zhang, Ziheng",
"Lin, Zhenxi",
"Zheng, Yefeng"
] | Relation-aware Ensemble Learning for Knowledge Graph Embedding | emnlp-main.1034 | 2310.08917 | [
"https://github.com/lars-research/relens"
] | https://huggingface.co/papers/2310.08917 | 1 | 2 | 0 | 8 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-main.1035.bib | https://aclanthology.org/2023.emnlp-main.1035/ | @inproceedings{maity-etal-2023-genex,
title = "{G}en{E}x: A Commonsense-aware Unified Generative Framework for Explainable Cyberbullying Detection",
author = "Maity, Krishanu and
Jain, Raghav and
Jha, Prince and
Saha, Sriparna and
Bhattacharyya, Pushpak",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1035",
doi = "10.18653/v1/2023.emnlp-main.1035",
pages = "16632--16645",
abstract = "With the rise of social media and online communication, the issue of cyberbullying has gained significant prominence. While extensive research is being conducted to develop more effective models for detecting cyberbullying in monolingual languages, a significant gap exists in understanding code-mixed languages and the need for explainability in this context. To address this gap, we have introduced a novel benchmark dataset named BullyExplain for explainable cyberbullying detection in code-mixed language. In this dataset, each post is meticulously annotated with four labels: bully, sentiment, target, and rationales, indicating the specific phrases responsible for identifying the post as a bully. Our current research presents an innovative unified generative framework, GenEx, which reimagines the multitask problem as a text-to-text generation task. Our proposed approach demonstrates its superiority across various evaluation metrics when applied to the BullyExplain dataset, surpassing other baseline models and current state-of-the-art approaches.",
}
| With the rise of social media and online communication, the issue of cyberbullying has gained significant prominence. While extensive research is being conducted to develop more effective models for detecting cyberbullying in monolingual languages, a significant gap exists in understanding code-mixed languages and the need for explainability in this context. To address this gap, we have introduced a novel benchmark dataset named BullyExplain for explainable cyberbullying detection in code-mixed language. In this dataset, each post is meticulously annotated with four labels: bully, sentiment, target, and rationales, indicating the specific phrases responsible for identifying the post as a bully. Our current research presents an innovative unified generative framework, GenEx, which reimagines the multitask problem as a text-to-text generation task. Our proposed approach demonstrates its superiority across various evaluation metrics when applied to the BullyExplain dataset, surpassing other baseline models and current state-of-the-art approaches. | [
"Maity, Krishanu",
"Jain, Raghav",
"Jha, Prince",
"Saha, Sriparna",
"Bhattacharyya, Pushpak"
] | GenEx: A Commonsense-aware Unified Generative Framework for Explainable Cyberbullying Detection | emnlp-main.1035 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1036.bib | https://aclanthology.org/2023.emnlp-main.1036/ | @inproceedings{wang-etal-2023-document-level,
title = "Document-Level Machine Translation with Large Language Models",
author = "Wang, Longyue and
Lyu, Chenyang and
Ji, Tianbo and
Zhang, Zhirui and
Yu, Dian and
Shi, Shuming and
Tu, Zhaopeng",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1036",
doi = "10.18653/v1/2023.emnlp-main.1036",
pages = "16646--16661",
abstract = "Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks. Taking document-level machine translation (MT) as a testbed, this paper provides an in-depth evaluation of LLMs{'} ability on discourse modeling. The study focuses on three aspects: 1) Effects of Context-Aware Prompts, where we investigate the impact of different prompts on document-level translation quality and discourse phenomena; 2) Comparison of Translation Models, where we compare the translation performance of ChatGPT with commercial MT systems and advanced document-level MT methods; 3) Analysis of Discourse Modelling Abilities, where we further probe discourse knowledge encoded in LLMs and shed light on impacts of training techniques on discourse modeling. By evaluating on a number of benchmarks, we surprisingly find that LLMs have demonstrated superior performance and show potential to become a new paradigm for document-level translation: 1) leveraging their powerful long-text modeling capabilities, GPT-3.5 and GPT-4 outperform commercial MT systems in terms of human evaluation; 2) GPT-4 demonstrates a stronger ability for probing linguistic knowledge than GPT-3.5. This work highlights the challenges and opportunities of LLMs for MT, which we hope can inspire the future design and evaluation of LLMs (We release our data and annotations at https://github.com/longyuewangdcu/Document-MT-LLM).",
}
| Large language models (LLMs) such as ChatGPT can produce coherent, cohesive, relevant, and fluent answers for various natural language processing (NLP) tasks. Taking document-level machine translation (MT) as a testbed, this paper provides an in-depth evaluation of LLMs{'} ability on discourse modeling. The study focuses on three aspects: 1) Effects of Context-Aware Prompts, where we investigate the impact of different prompts on document-level translation quality and discourse phenomena; 2) Comparison of Translation Models, where we compare the translation performance of ChatGPT with commercial MT systems and advanced document-level MT methods; 3) Analysis of Discourse Modelling Abilities, where we further probe discourse knowledge encoded in LLMs and shed light on impacts of training techniques on discourse modeling. By evaluating on a number of benchmarks, we surprisingly find that LLMs have demonstrated superior performance and show potential to become a new paradigm for document-level translation: 1) leveraging their powerful long-text modeling capabilities, GPT-3.5 and GPT-4 outperform commercial MT systems in terms of human evaluation; 2) GPT-4 demonstrates a stronger ability for probing linguistic knowledge than GPT-3.5. This work highlights the challenges and opportunities of LLMs for MT, which we hope can inspire the future design and evaluation of LLMs (We release our data and annotations at https://github.com/longyuewangdcu/Document-MT-LLM). | [
"Wang, Longyue",
"Lyu, Chenyang",
"Ji, Tianbo",
"Zhang, Zhirui",
"Yu, Dian",
"Shi, Shuming",
"Tu, Zhaopeng"
] | Document-Level Machine Translation with Large Language Models | emnlp-main.1036 | 2304.02210 | [
"https://github.com/longyuewangdcu/document-mt-llm"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1037.bib | https://aclanthology.org/2023.emnlp-main.1037/ | @inproceedings{joseph-etal-2023-multilingual,
title = "Multilingual Simplification of Medical Texts",
author = "Joseph, Sebastian and
Kazanas, Kathryn and
Reina, Keziah and
Ramanathan, Vishnesh and
Xu, Wei and
Wallace, Byron and
Li, Junyi Jessy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1037",
doi = "10.18653/v1/2023.emnlp-main.1037",
pages = "16662--16692",
abstract = "Automated text simplification aims to produce simple versions of complex texts. This task is especially useful in the medical domain, where the latest medical findings are typically communicated via complex and technical articles. This creates barriers for laypeople seeking access to up-to-date medical findings, consequently impeding progress on health literacy. Most existing work on medical text simplification has focused on monolingual settings, with the result that such evidence would be available only in just one language (most often, English). This work addresses this limitation via multilingual simplification, i.e., directly simplifying complex texts into simplified texts in multiple languages. We introduce MultiCochrane, the first sentence-aligned multilingual text simplification dataset for the medical domain in four languages: English, Spanish, French, and Farsi. We evaluate fine-tuned and zero-shot models across these languages with extensive human assessments and analyses. Although models can generate viable simplified texts, we identify several outstanding challenges that this dataset might be used to address.",
}
| Automated text simplification aims to produce simple versions of complex texts. This task is especially useful in the medical domain, where the latest medical findings are typically communicated via complex and technical articles. This creates barriers for laypeople seeking access to up-to-date medical findings, consequently impeding progress on health literacy. Most existing work on medical text simplification has focused on monolingual settings, with the result that such evidence would be available in only one language (most often, English). This work addresses this limitation via multilingual simplification, i.e., directly simplifying complex texts into simplified texts in multiple languages. We introduce MultiCochrane, the first sentence-aligned multilingual text simplification dataset for the medical domain in four languages: English, Spanish, French, and Farsi. We evaluate fine-tuned and zero-shot models across these languages with extensive human assessments and analyses. Although models can generate viable simplified texts, we identify several outstanding challenges that this dataset might be used to address. | [
"Joseph, Sebastian",
"Kazanas, Kathryn",
"Reina, Keziah",
"Ramanathan, Vishnesh",
"Xu, Wei",
"Wallace, Byron",
"Li, Junyi Jessy"
] | Multilingual Simplification of Medical Texts | emnlp-main.1037 | 2305.12532 | [
"https://github.com/sebajoe/multicochrane"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1038.bib | https://aclanthology.org/2023.emnlp-main.1038/ | @inproceedings{kumar-etal-2023-reviewers,
title = "When Reviewers Lock Horns: Finding Disagreements in Scientific Peer Reviews",
author = "Kumar, Sandeep and
Ghosal, Tirthankar and
Ekbal, Asif",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1038",
doi = "10.18653/v1/2023.emnlp-main.1038",
pages = "16693--16704",
abstract = "To this date, the efficacy of the scientific publishing enterprise fundamentally rests on the strength of the peer review process. The journal editor or the conference chair primarily relies on the expert reviewers{'} assessment, $\textit{identify points of agreement and disagreement}$ and try to reach a consensus to make a fair and informed decision on whether to accept or reject a paper. However, with the escalating number of submissions requiring review, especially in top-tier Artificial Intelligence (AI) conferences, the editor/chair, among many other works, invests a significant, sometimes stressful effort to mitigate reviewer disagreements. Here in this work, we introduce a novel task of automatically identifying contradictions among reviewers on a given article. To this end, we introduce $\textit{ContraSciView}$, a comprehensive review-pair contradiction dataset on around 8.5k papers (with around 28k review pairs containing nearly 50k review pair comments) from the open review-based ICLR and NeurIPS conferences. We further propose a baseline model that detects contradictory statements from the review pairs. To the best of our knowledge, we make the first attempt to identify disagreements among peer reviewers automatically. We make our dataset and code public for further investigations.",
}
| To date, the efficacy of the scientific publishing enterprise fundamentally rests on the strength of the peer review process. The journal editor or the conference chair primarily relies on the expert reviewers{'} assessment, $\textit{identifies points of agreement and disagreement}$, and tries to reach a consensus to make a fair and informed decision on whether to accept or reject a paper. However, with the escalating number of submissions requiring review, especially in top-tier Artificial Intelligence (AI) conferences, the editor/chair, among many other duties, invests a significant, sometimes stressful effort to mitigate reviewer disagreements. In this work, we introduce a novel task of automatically identifying contradictions among reviewers on a given article. To this end, we introduce $\textit{ContraSciView}$, a comprehensive review-pair contradiction dataset on around 8.5k papers (with around 28k review pairs containing nearly 50k review pair comments) from the open review-based ICLR and NeurIPS conferences. We further propose a baseline model that detects contradictory statements from the review pairs. To the best of our knowledge, we make the first attempt to identify disagreements among peer reviewers automatically. We make our dataset and code public for further investigations. | [
"Kumar, S",
"eep",
"Ghosal, Tirthankar",
"Ekbal, Asif"
] | When Reviewers Lock Horns: Finding Disagreements in Scientific Peer Reviews | emnlp-main.1038 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1039.bib | https://aclanthology.org/2023.emnlp-main.1039/ | @inproceedings{lin-etal-2023-argue,
title = "Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation",
author = "Lin, Jiayu and
Ye, Rong and
Han, Meng and
Zhang, Qi and
Lai, Ruofei and
Zhang, Xinyu and
Cao, Zhao and
Huang, Xuanjing and
Wei, Zhongyu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1039",
doi = "10.18653/v1/2023.emnlp-main.1039",
pages = "16705--16720",
abstract = "Counter-argument generation{---}a captivating area in computational linguistics{---}seeks to craft statements that offer opposing views. While most research has ventured into paragraph-level generation, sentence-level counter-argument generation beckons with its unique constraints and brevity-focused challenges. Furthermore, the diverse nature of counter-arguments poses challenges for evaluating model performance solely based on n-gram-based metrics. In this paper, we present the ArgTersely benchmark for sentence-level counter-argument generation, drawing from a manually annotated dataset from the ChangeMyView debate forum. We also propose Arg-LlaMA for generating high-quality counter-argument. For better evaluation, we trained a BERT-based evaluator Arg-Judge with human preference data. We conducted comparative experiments involving various baselines such as LlaMA, Alpaca, GPT-3, and others. The results show the competitiveness of our proposed framework and evaluator in counter-argument generation tasks. Code and data are available at https://github.com/amazingljy1206/ArgTersely.",
}
| Counter-argument generation{---}a captivating area in computational linguistics{---}seeks to craft statements that offer opposing views. While most research has ventured into paragraph-level generation, sentence-level counter-argument generation beckons with its unique constraints and brevity-focused challenges. Furthermore, the diverse nature of counter-arguments poses challenges for evaluating model performance solely based on n-gram-based metrics. In this paper, we present the ArgTersely benchmark for sentence-level counter-argument generation, drawing from a manually annotated dataset from the ChangeMyView debate forum. We also propose Arg-LlaMA for generating high-quality counter-arguments. For better evaluation, we trained a BERT-based evaluator, Arg-Judge, with human preference data. We conducted comparative experiments involving various baselines such as LlaMA, Alpaca, GPT-3, and others. The results show the competitiveness of our proposed framework and evaluator in counter-argument generation tasks. Code and data are available at https://github.com/amazingljy1206/ArgTersely. | [
"Lin, Jiayu",
"Ye, Rong",
"Han, Meng",
"Zhang, Qi",
"Lai, Ruofei",
"Zhang, Xinyu",
"Cao, Zhao",
"Huang, Xuanjing",
"Wei, Zhongyu"
] | Argue with Me Tersely: Towards Sentence-Level Counter-Argument Generation | emnlp-main.1039 | 2312.13608 | [
"https://github.com/amazingljy1206/argtersely"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1040.bib | https://aclanthology.org/2023.emnlp-main.1040/ | @inproceedings{billah-nagoudi-etal-2023-jasmine,
title = "{JASMINE}: {A}rabic {GPT} Models for Few-Shot Learning",
author = "Billah Nagoudi, El Moatez and
Abdul-Mageed, Muhammad and
Elmadany, AbdelRahim and
Inciarte, Alcides and
Islam Khondaker, Md Tawkat",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1040",
doi = "10.18653/v1/2023.emnlp-main.1040",
pages = "16721--16744",
abstract = "Scholarship on generative pretraining (GPT) remains acutely Anglocentric, leaving serious gaps in our understanding of the whole class of autoregressive models. For example, we have little knowledge about the potential of these models and their societal impacts in diverse linguistic and cultural settings. We alleviate this issue for Arabic, a wide collection of languages and dialectal varieties with more than 400 million population, by introducing JASMINE. JASMINE is a suite of powerful Arabic autoregressive Transformer language models ranging in size between 300 million-6.7 billion parameters pretrained on a large and diverse dataset ( 235 GB of text). We also carefully design and release a comprehensive benchmark for both automated and human evaluation of Arabic autoregressive models, with coverage of potential social biases, harms, and toxicity. Using our novel benchmark, we evaluate JASMINE extensively showing powerful performance intrinsically as well as in few-shot learning on a wide range of NLP tasks. We aim to responsibly release our models and evaluation benchmark with interested researchers, along with code for experimenting with them.",
}
| Scholarship on generative pretraining (GPT) remains acutely Anglocentric, leaving serious gaps in our understanding of the whole class of autoregressive models. For example, we have little knowledge about the potential of these models and their societal impacts in diverse linguistic and cultural settings. We alleviate this issue for Arabic, a wide collection of languages and dialectal varieties with a combined population of more than 400 million, by introducing JASMINE. JASMINE is a suite of powerful Arabic autoregressive Transformer language models ranging in size from 300 million to 6.7 billion parameters, pretrained on a large and diverse dataset (235 GB of text). We also carefully design and release a comprehensive benchmark for both automated and human evaluation of Arabic autoregressive models, with coverage of potential social biases, harms, and toxicity. Using our novel benchmark, we evaluate JASMINE extensively, showing powerful performance intrinsically as well as in few-shot learning on a wide range of NLP tasks. We aim to responsibly release our models and evaluation benchmark with interested researchers, along with code for experimenting with them. | [
"Billah Nagoudi, El Moatez",
"Abdul-Mageed, Muhammad",
"Elmadany, AbdelRahim",
"Inciarte, Alcides",
"Islam Khondaker, Md Tawkat"
] | JASMINE: Arabic GPT Models for Few-Shot Learning | emnlp-main.1040 | 2212.10755 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1041.bib | https://aclanthology.org/2023.emnlp-main.1041/ | @inproceedings{jullien-etal-2023-nli4ct,
title = "{NLI}4{CT}: Multi-Evidence Natural Language Inference for Clinical Trial Reports",
author = "Jullien, Mael and
Valentino, Marco and
Frost, Hannah and
O{'}Regan, Paul and
Landers, D{\'o}nal and
Freitas, Andre",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1041",
doi = "10.18653/v1/2023.emnlp-main.1041",
pages = "16745--16764",
abstract = "How can we interpret and retrieve medical evidence to support clinical decisions? Clinical trial reports (CTR) amassed over the years contain indispensable information for the development of personalized medicine. However, it is practically infeasible to manually inspect over 400,000+ clinical trial reports in order to find the best evidence for experimental treatments. Natural Language Inference (NLI) offers a potential solution to this problem, by allowing the scalable computation of textual entailment. However, existing NLI models perform poorly on biomedical corpora, and previously published datasets fail to capture the full complexity of inference over CTRs. In this work, we present a novel resource to advance research on NLI for reasoning on CTRs. The resource includes two main tasks. Firstly, to determine the inference relation between a natural language statement, and a CTR. Secondly, to retrieve supporting facts to justify the predicted relation. We provide NLI4CT, a corpus of 2400 statements and CTRs, annotated for these tasks. Baselines on this corpus expose the limitations of existing NLI approaches, with 6 state-of-the-art NLI models achieving a maximum F1 score of 0.627. To the best of our knowledge, we are the first to design a task that covers the interpretation of full CTRs. To encourage further work on this challenging dataset, we make the corpus, competition leaderboard, and website, available on CodaLab, and code to replicate the baseline experiments on GitHub.",
}
| How can we interpret and retrieve medical evidence to support clinical decisions? Clinical trial reports (CTR) amassed over the years contain indispensable information for the development of personalized medicine. However, it is practically infeasible to manually inspect the 400,000+ clinical trial reports in order to find the best evidence for experimental treatments. Natural Language Inference (NLI) offers a potential solution to this problem, by allowing the scalable computation of textual entailment. However, existing NLI models perform poorly on biomedical corpora, and previously published datasets fail to capture the full complexity of inference over CTRs. In this work, we present a novel resource to advance research on NLI for reasoning on CTRs. The resource includes two main tasks. Firstly, to determine the inference relation between a natural language statement and a CTR. Secondly, to retrieve supporting facts to justify the predicted relation. We provide NLI4CT, a corpus of 2400 statements and CTRs, annotated for these tasks. Baselines on this corpus expose the limitations of existing NLI approaches, with 6 state-of-the-art NLI models achieving a maximum F1 score of 0.627. To the best of our knowledge, we are the first to design a task that covers the interpretation of full CTRs. To encourage further work on this challenging dataset, we make the corpus, competition leaderboard, and website available on CodaLab, and code to replicate the baseline experiments on GitHub. | [
"Jullien, Mael",
"Valentino, Marco",
"Frost, Hannah",
"O{'}Regan, Paul",
"L",
"ers, D{\\'o}nal",
"Freitas, Andre"
] | NLI4CT: Multi-Evidence Natural Language Inference for Clinical Trial Reports | emnlp-main.1041 | 2305.03598 | [
"https://github.com/ai-systems/nli4ct"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1042.bib | https://aclanthology.org/2023.emnlp-main.1042/ | @inproceedings{ridley-etal-2023-addressing,
title = "Addressing Linguistic Bias through a Contrastive Analysis of Academic Writing in the {NLP} Domain",
author = "Ridley, Robert and
Wu, Zhen and
Zhang, Jianbing and
Huang, Shujian and
Dai, Xinyu",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1042",
doi = "10.18653/v1/2023.emnlp-main.1042",
pages = "16765--16779",
abstract = "It has been well documented that a reviewer{'}s opinion of the nativeness of expression in an academic paper affects the likelihood of it being accepted for publication. Previous works have also shone a light on the stress and anxiety authors who are non-native English speakers experience when attempting to publish in international venues. We explore how this might be a concern in the field of Natural Language Processing (NLP) through conducting a comprehensive statistical analysis of NLP paper abstracts, identifying how authors of different linguistic backgrounds differ in the lexical, morphological, syntactic and cohesive aspects of their writing. Through our analysis, we identify that there are a number of characteristics that are highly variable across the different corpora examined in this paper. This indicates potential for the presence of linguistic bias. Therefore, we outline a set of recommendations to publishers of academic journals and conferences regarding their guidelines and resources for prospective authors in order to help enhance inclusivity and fairness.",
}
| It has been well documented that a reviewer{'}s opinion of the nativeness of expression in an academic paper affects the likelihood of it being accepted for publication. Previous works have also shone a light on the stress and anxiety authors who are non-native English speakers experience when attempting to publish in international venues. We explore how this might be a concern in the field of Natural Language Processing (NLP) through conducting a comprehensive statistical analysis of NLP paper abstracts, identifying how authors of different linguistic backgrounds differ in the lexical, morphological, syntactic and cohesive aspects of their writing. Through our analysis, we identify that there are a number of characteristics that are highly variable across the different corpora examined in this paper. This indicates potential for the presence of linguistic bias. Therefore, we outline a set of recommendations to publishers of academic journals and conferences regarding their guidelines and resources for prospective authors in order to help enhance inclusivity and fairness. | [
"Ridley, Robert",
"Wu, Zhen",
"Zhang, Jianbing",
"Huang, Shujian",
"Dai, Xinyu"
] | Addressing Linguistic Bias through a Contrastive Analysis of Academic Writing in the NLP Domain | emnlp-main.1042 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Oral |
||
https://aclanthology.org/2023.emnlp-main.1043.bib | https://aclanthology.org/2023.emnlp-main.1043/ | @inproceedings{zhang-etal-2023-robustgec,
title = "{R}obust{GEC}: Robust Grammatical Error Correction Against Subtle Context Perturbation",
author = "Zhang, Yue and
Cui, Leyang and
Zhao, Enbo and
Bi, Wei and
Shi, Shuming",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1043",
doi = "10.18653/v1/2023.emnlp-main.1043",
pages = "16780--16793",
abstract = "Grammatical Error Correction (GEC) systems play a vital role in assisting people with their daily writing tasks. However, users may sometimes come across a GEC system that initially performs well but fails to correct errors when the inputs are slightly modified. To ensure an ideal user experience, a reliable GEC system should have the ability to provide consistent and accurate suggestions when encountering irrelevant context perturbations, which we refer to as context robustness. In this paper, we introduce RobustGEC, a benchmark designed to evaluate the context robustness of GEC systems. RobustGEC comprises 5,000 GEC cases, each with one original error-correct sentence pair and five variants carefully devised by human annotators. Utilizing RobustGEC, we reveal that state-of-the-art GEC systems still lack sufficient robustness against context perturbations. Moreover, we propose a simple yet effective method for remitting this issue.",
}
| Grammatical Error Correction (GEC) systems play a vital role in assisting people with their daily writing tasks. However, users may sometimes come across a GEC system that initially performs well but fails to correct errors when the inputs are slightly modified. To ensure an ideal user experience, a reliable GEC system should have the ability to provide consistent and accurate suggestions when encountering irrelevant context perturbations, which we refer to as context robustness. In this paper, we introduce RobustGEC, a benchmark designed to evaluate the context robustness of GEC systems. RobustGEC comprises 5,000 GEC cases, each with one original error-correct sentence pair and five variants carefully devised by human annotators. Utilizing RobustGEC, we reveal that state-of-the-art GEC systems still lack sufficient robustness against context perturbations. Moreover, we propose a simple yet effective method for remitting this issue. | [
"Zhang, Yue",
"Cui, Leyang",
"Zhao, Enbo",
"Bi, Wei",
"Shi, Shuming"
] | RobustGEC: Robust Grammatical Error Correction Against Subtle Context Perturbation | emnlp-main.1043 | 2310.07299 | [
"https://github.com/hillzhang1999/robustgec"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1044.bib | https://aclanthology.org/2023.emnlp-main.1044/ | @inproceedings{salman-etal-2023-detecting,
title = "Detecting Propaganda Techniques in Code-Switched Social Media Text",
author = "Salman, Muhammad and
Hanif, Asif and
Shehata, Shady and
Nakov, Preslav",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1044",
doi = "10.18653/v1/2023.emnlp-main.1044",
pages = "16794--16812",
abstract = "Propaganda is a form of communication intended to influence the opinions and the mindset of the public to promote a particular agenda. With the rise of social media, propaganda has spread rapidly, leading to the need for automatic propaganda detection systems. Most work on propaganda detection has focused on high-resource languages, such as English, and little effort has been made to detect propaganda for low-resource languages. Yet, it is common to find a mix of multiple languages in social media communication, a phenomenon known as code-switching. Code-switching combines different languages within the same text, which poses a challenge for automatic systems. Considering this premise, we propose a novel task of detecting propaganda techniques in code-switched text. To support this task, we create a corpus of 1,030 texts code-switching between English and Roman Urdu, annotated with 20 propaganda techniques at fragment-level. We perform a number of experiments contrasting different experimental setups, and we find that it is important to model the multilinguality directly rather than using translation as well as to use the right fine-tuning strategy. We plan to publicly release our code and dataset.",
}
| Propaganda is a form of communication intended to influence the opinions and the mindset of the public to promote a particular agenda. With the rise of social media, propaganda has spread rapidly, leading to the need for automatic propaganda detection systems. Most work on propaganda detection has focused on high-resource languages, such as English, and little effort has been made to detect propaganda for low-resource languages. Yet, it is common to find a mix of multiple languages in social media communication, a phenomenon known as code-switching. Code-switching combines different languages within the same text, which poses a challenge for automatic systems. Considering this premise, we propose a novel task of detecting propaganda techniques in code-switched text. To support this task, we create a corpus of 1,030 texts code-switching between English and Roman Urdu, annotated with 20 propaganda techniques at fragment-level. We perform a number of experiments contrasting different experimental setups, and we find that it is important to model the multilinguality directly rather than using translation as well as to use the right fine-tuning strategy. We plan to publicly release our code and dataset. | [
"Salman, Muhammad",
"Hanif, Asif",
"Shehata, Shady",
"Nakov, Preslav"
] | Detecting Propaganda Techniques in Code-Switched Social Media Text | emnlp-main.1044 | 2305.14534 | [
"https://github.com/mbzuai-nlp/propaganda-codeswitched-text"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1045.bib | https://aclanthology.org/2023.emnlp-main.1045/ | @inproceedings{widiaputri-etal-2023-speech,
title = "Speech Recognition and Meaning Interpretation: Towards Disambiguation of Structurally Ambiguous Spoken Utterances in {I}ndonesian",
author = "Widiaputri, Ruhiyah and
Purwarianti, Ayu and
Lestari, Dessi and
Azizah, Kurniawati and
Tanaya, Dipta and
Sakti, Sakriani",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1045",
doi = "10.18653/v1/2023.emnlp-main.1045",
pages = "16813--16824",
abstract = "Despite being the world{'}s fourth-most populous country, the development of spoken language technologies in Indonesia still needs improvement. Most automatic speech recognition (ASR) systems that have been developed are still limited to transcribing the exact word-by-word, which, in many cases, consists of ambiguous sentences. In fact, speakers use prosodic characteristics of speech to convey different interpretations, which, unfortunately, these systems often ignore. In this study, we attempt to resolve structurally ambiguous utterances into unambiguous texts in Indonesian using prosodic information. To the best of our knowledge, this might be the first study to address this problem in the ASR context. Our contributions include (1) collecting the Indonesian speech corpus on structurally ambiguous sentences; (2) conducting a survey on how people disambiguate structurally ambiguous sentences presented in both text and speech forms; and (3) constructing an Indonesian ASR and meaning interpretation system by utilizing both cascade and direct approaches to map speech to text, along with two additional prosodic information signals (pause and pitch). The experimental results reveal that it is possible to disambiguate these utterances. In this study, the proposed cascade system, utilizing Mel-spectrograms concatenated with F0 and energy as input, achieved a disambiguation accuracy of 79.6{\%}, while the proposed direct system with the same input yielded an even more impressive disambiguation accuracy of 82.2{\%}.",
}
| Although Indonesia is the world{'}s fourth-most populous country, the development of its spoken language technologies still needs improvement. Most automatic speech recognition (ASR) systems that have been developed are still limited to exact word-by-word transcription, which, in many cases, yields ambiguous sentences. In fact, speakers use prosodic characteristics of speech to convey different interpretations, which, unfortunately, these systems often ignore. In this study, we attempt to resolve structurally ambiguous utterances into unambiguous texts in Indonesian using prosodic information. To the best of our knowledge, this might be the first study to address this problem in the ASR context. Our contributions include (1) collecting an Indonesian speech corpus of structurally ambiguous sentences; (2) conducting a survey on how people disambiguate structurally ambiguous sentences presented in both text and speech forms; and (3) constructing an Indonesian ASR and meaning interpretation system by utilizing both cascade and direct approaches to map speech to text, along with two additional prosodic information signals (pause and pitch). The experimental results reveal that it is possible to disambiguate these utterances. In this study, the proposed cascade system, utilizing Mel-spectrograms concatenated with F0 and energy as input, achieved a disambiguation accuracy of 79.6{\%}, while the proposed direct system with the same input yielded an even more impressive disambiguation accuracy of 82.2{\%}. | [
"Widiaputri, Ruhiyah",
"Purwarianti, Ayu",
"Lestari, Dessi",
"Azizah, Kurniawati",
"Tanaya, Dipta",
"Sakti, Sakriani"
] | Speech Recognition and Meaning Interpretation: Towards Disambiguation of Structurally Ambiguous Spoken Utterances in Indonesian | emnlp-main.1045 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-main.1046.bib | https://aclanthology.org/2023.emnlp-main.1046/ | @inproceedings{lee-etal-2023-target,
title = "Target-Agnostic Gender-Aware Contrastive Learning for Mitigating Bias in Multilingual Machine Translation",
author = "Lee, Minwoo and
Koh, Hyukhun and
Lee, Kang-il and
Zhang, Dongdong and
Kim, Minsung and
Jung, Kyomin",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1046",
doi = "10.18653/v1/2023.emnlp-main.1046",
pages = "16825--16839",
abstract = "Gender bias is a significant issue in machine translation, leading to ongoing research efforts in developing bias mitigation techniques. However, most works focus on debiasing bilingual models without much consideration for multilingual systems. In this paper, we specifically target the gender bias issue of multilingual machine translation models for unambiguous cases where there is a single correct translation, and propose a bias mitigation method based on a novel approach. Specifically, we propose Gender-Aware Contrastive Learning, GACL, which encodes contextual gender information into the representations of non-explicit gender words. Our method is target language-agnostic and is applicable to pre-trained multilingual machine translation models via fine-tuning. Through multilingual evaluation, we show that our approach improves gender accuracy by a wide margin without hampering translation performance. We also observe that incorporated gender information transfers and benefits other target languages regarding gender accuracy. Finally, we demonstrate that our method is applicable and beneficial to models of various sizes.",
}
| Gender bias is a significant issue in machine translation, leading to ongoing research efforts in developing bias mitigation techniques. However, most works focus on debiasing bilingual models without much consideration for multilingual systems. In this paper, we specifically target the gender bias issue of multilingual machine translation models for unambiguous cases where there is a single correct translation, and propose a bias mitigation method based on a novel approach. Specifically, we propose Gender-Aware Contrastive Learning, GACL, which encodes contextual gender information into the representations of non-explicit gender words. Our method is target language-agnostic and is applicable to pre-trained multilingual machine translation models via fine-tuning. Through multilingual evaluation, we show that our approach improves gender accuracy by a wide margin without hampering translation performance. We also observe that incorporated gender information transfers and benefits other target languages regarding gender accuracy. Finally, we demonstrate that our method is applicable and beneficial to models of various sizes. | [
"Lee, Minwoo",
"Koh, Hyukhun",
"Lee, Kang-il",
"Zhang, Dongdong",
"Kim, Minsung",
"Jung, Kyomin"
] | Target-Agnostic Gender-Aware Contrastive Learning for Mitigating Bias in Multilingual Machine Translation | emnlp-main.1046 | 2305.14016 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-main.1047.bib | https://aclanthology.org/2023.emnlp-main.1047/ | @inproceedings{pattichis-etal-2023-code,
title = "Code-Switching Metrics Using Intonation Units",
author = "Pattichis, Rebecca and
LaCasse, Dora and
Trawick, Sonya and
Cacoullos, Rena",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.1047",
doi = "10.18653/v1/2023.emnlp-main.1047",
pages = "16840--16849",
abstract = "Code-switching (CS) metrics in NLP that are based on word-level units are misaligned with true bilingual CS behavior. Crucially, CS is not equally likely between any two words, but follows syntactic and prosodic rules. We adapt two metrics, multilinguality and CS probability, and apply them to transcribed bilingual speech, for the first time putting forward Intonation Units (IUs) {--} prosodic speech segments {--} as basic tokens for NLP tasks. In addition, we calculate these two metrics separately for distinct mixing types: alternating-language multi-word strings and single-word incorporations from one language into another. Results indicate that individual differences according to the two CS metrics are independent. However, there is a shared tendency among bilinguals for multi-word CS to occur across, rather than within, IU boundaries. That is, bilinguals tend to prosodically separate their two languages. This constraint is blurred when metric calculations do not distinguish multi-word and single-word items. These results call for a reconsideration of units of analysis in future development of CS datasets for NLP tasks.",
}
| Code-switching (CS) metrics in NLP that are based on word-level units are misaligned with true bilingual CS behavior. Crucially, CS is not equally likely between any two words, but follows syntactic and prosodic rules. We adapt two metrics, multilinguality and CS probability, and apply them to transcribed bilingual speech, for the first time putting forward Intonation Units (IUs) {--} prosodic speech segments {--} as basic tokens for NLP tasks. In addition, we calculate these two metrics separately for distinct mixing types: alternating-language multi-word strings and single-word incorporations from one language into another. Results indicate that individual differences according to the two CS metrics are independent. However, there is a shared tendency among bilinguals for multi-word CS to occur across, rather than within, IU boundaries. That is, bilinguals tend to prosodically separate their two languages. This constraint is blurred when metric calculations do not distinguish multi-word and single-word items. These results call for a reconsideration of units of analysis in future development of CS datasets for NLP tasks. | [
"Pattichis, Rebecca",
"LaCasse, Dora",
"Trawick, Sonya",
"Cacoullos, Rena"
] | Code-Switching Metrics Using Intonation Units | emnlp-main.1047 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-tutorial.1.bib | https://aclanthology.org/2023.emnlp-tutorial.1/ | @inproceedings{joty-etal-2023-nlp,
title = "{NLP}+{V}is: {NLP} Meets Visualization",
author = "Joty, Shafiq and
Hoque, Enamul and
Vig, Jesse",
editor = "Zhang, Qi and
Sajjad, Hassan",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-tutorial.1",
doi = "10.18653/v1/2023.emnlp-tutorial.1",
pages = "1--6",
abstract = "Natural language and visualization (Vis) are two powerful modalities of human communication. The goal of this tutorial is to push forward the agenda of tightly integrating these two modalities. To this end, the tutorial will introduce NLP+Vis with a focus on two main threads of work: \textit{(i) NLP for Vis:} How to develop and adapt state-of-the-art NLP models for solving various visualization tasks? and \textit{(ii) Vis for NLP:} How to leverage visualization techniques to interpret and explain complex NLP models effectively? The tutorial will first motivate why NLP+Vis is an important area of research and provide an overview of research topics on combining NLP and Vis techniques. Then an overview of state-of-the-art deep learning models for NLP will be covered. Next, we will provide an overview of applying visualization techniques to help make NLP models more interpretable and explainable. In the final part, we will focus on various application tasks at the intersection of NLP and Vis. We will conclude with an interactive discussion of future challenges for NLP+Vis applications. The audience will include researchers interested in applying NLP for visualizations as well as others who focus more generally at the intersection of machine learning and visualization.",
}
| Natural language and visualization (Vis) are two powerful modalities of human communication. The goal of this tutorial is to push forward the agenda of tightly integrating these two modalities. To this end, the tutorial will introduce NLP+Vis with a focus on two main threads of work: \textit{(i) NLP for Vis:} How to develop and adapt state-of-the-art NLP models for solving various visualization tasks? and \textit{(ii) Vis for NLP:} How to leverage visualization techniques to interpret and explain complex NLP models effectively? The tutorial will first motivate why NLP+Vis is an important area of research and provide an overview of research topics on combining NLP and Vis techniques. Then an overview of state-of-the-art deep learning models for NLP will be covered. Next, we will provide an overview of applying visualization techniques to help make NLP models more interpretable and explainable. In the final part, we will focus on various application tasks at the intersection of NLP and Vis. We will conclude with an interactive discussion of future challenges for NLP+Vis applications. The audience will include researchers interested in applying NLP for visualizations as well as others who focus more generally at the intersection of machine learning and visualization. | [
"Joty, Shafiq",
"Hoque, Enamul",
"Vig, Jesse"
] | NLP+Vis: NLP Meets Visualization | emnlp-tutorial.1 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-tutorial.2.bib | https://aclanthology.org/2023.emnlp-tutorial.2/ | @inproceedings{xu-he-2023-security,
title = "Security Challenges in Natural Language Processing Models",
author = "Xu, Qiongkai and
He, Xuanli",
editor = "Zhang, Qi and
Sajjad, Hassan",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-tutorial.2",
doi = "10.18653/v1/2023.emnlp-tutorial.2",
pages = "7--12",
abstract = "Large-scale natural language processing models have been developed and integrated into numerous applications, given the advantage of their remarkable performance. Nonetheless, the security concerns associated with these models prevent the widespread adoption of these black-box machine learning models. In this tutorial, we will dive into three emerging security issues in NLP research, i.e., backdoor attacks, private data leakage, and imitation attacks. These threats will be introduced in accordance with their threatening usage scenarios, attack methodologies, and defense technologies.",
}
| Large-scale natural language processing models have been developed and integrated into numerous applications, given the advantage of their remarkable performance. Nonetheless, the security concerns associated with these models prevent the widespread adoption of these black-box machine learning models. In this tutorial, we will dive into three emerging security issues in NLP research, i.e., backdoor attacks, private data leakage, and imitation attacks. These threats will be introduced in accordance with their threatening usage scenarios, attack methodologies, and defense technologies. | [
"Xu, Qiongkai",
"He, Xuanli"
] | Security Challenges in Natural Language Processing Models | emnlp-tutorial.2 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-tutorial.3.bib | https://aclanthology.org/2023.emnlp-tutorial.3/ | @inproceedings{wu-etal-2023-designing,
title = "Designing, Evaluating, and Learning from Humans Interacting with {NLP} Models",
author = "Wu, Tongshuang and
Yang, Diyi and
Santy, Sebastin",
editor = "Zhang, Qi and
Sajjad, Hassan",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-tutorial.3",
doi = "10.18653/v1/2023.emnlp-tutorial.3",
pages = "13--18",
abstract = "The rapid advancement of natural language processing (NLP) research has led to various applications spanning a wide range of domains that require models to interact with humans {--} e.g., chatbots responding to human inquiries, machine translation systems assisting human translators, designers prompting Large Language Models for co-creation or prototyping AI-infused applications, etc. In these cases, humans interaction is key to the success of NLP applications; any potential misconceptions or differences might lead to error cascades at the subsequent stages. Such interaction involves a lot of design choices around models, e.g. the sensitivity of interfaces, the impact of design choice and evaluation questions, etc. This tutorial aims to provide a systematic and up-to-date overview of key considerations and effective approaches for studying human-NLP model interactions. Our tutorial will focus specifically on the scenario where end users {--} lay people and domain experts who have access to NLP models but are less familiar with NLP techniques {--} use or collaborate with deployed models. Throughout the tutorial, we will use five case studies (on classifier-assisted decision making, machine-aided translation, dialog systems, and prompting) to cover three major themes: (1) how to conduct human-in-the-loop usability evaluations to ensure that models are capable of interacting with humans; (2) how to design user interfaces (UIs) and interaction mechanisms that provide end users with easy access to NLP models; (3) how to learn and improve NLP models through the human interactions. We will use best practices from HCI to ground our discussion, and will highlight current challenges and future directions.",
}
| The rapid advancement of natural language processing (NLP) research has led to various applications spanning a wide range of domains that require models to interact with humans {--} e.g., chatbots responding to human inquiries, machine translation systems assisting human translators, designers prompting Large Language Models for co-creation or prototyping AI-infused applications, etc. In these cases, human interaction is key to the success of NLP applications; any potential misconceptions or differences might lead to error cascades at the subsequent stages. Such interaction involves a lot of design choices around models, e.g. the sensitivity of interfaces, the impact of design choice and evaluation questions, etc. This tutorial aims to provide a systematic and up-to-date overview of key considerations and effective approaches for studying human-NLP model interactions. Our tutorial will focus specifically on the scenario where end users {--} lay people and domain experts who have access to NLP models but are less familiar with NLP techniques {--} use or collaborate with deployed models. Throughout the tutorial, we will use five case studies (on classifier-assisted decision making, machine-aided translation, dialog systems, and prompting) to cover three major themes: (1) how to conduct human-in-the-loop usability evaluations to ensure that models are capable of interacting with humans; (2) how to design user interfaces (UIs) and interaction mechanisms that provide end users with easy access to NLP models; (3) how to learn and improve NLP models through human interactions. We will use best practices from HCI to ground our discussion, and will highlight current challenges and future directions. | [
"Wu, Tongshuang",
"Yang, Diyi",
"Santy, Sebastin"
] | Designing, Evaluating, and Learning from Humans Interacting with NLP Models | emnlp-tutorial.3 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-tutorial.4.bib | https://aclanthology.org/2023.emnlp-tutorial.4/ | @inproceedings{yin-etal-2023-llm,
title = "{LLM}-driven Instruction Following: Progresses and Concerns",
author = {Yin, Wenpeng and
Ye, Qinyuan and
Liu, Pengfei and
Ren, Xiang and
Sch{\"u}tze, Hinrich},
editor = "Zhang, Qi and
Sajjad, Hassan",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-tutorial.4",
doi = "10.18653/v1/2023.emnlp-tutorial.4",
pages = "19--25",
abstract = "The progress of natural language processing (NLP) is primarily driven by machine learning that optimizes a system on a large-scale set of task-specific labeled examples. This learning paradigm limits the ability of machines to have the same capabilities as humans in handling new tasks since humans can often solve unseen tasks with a couple of examples accompanied by task instructions. In addition, we may not have a chance to prepare task-specific examples of large-volume for new tasks because we cannot foresee what task needs to be addressed next and how complex to annotate for it. Therefore, task instructions act as a novel and promising resource for supervision. This tutorial targets researchers and practitioners who are interested in AI and ML technologies for NLP generalization in a low-shot scenario. In particular, we will present a diverse thread of instruction-driven NLP studies that try to answer the following questions: (i) What is task instruction? (ii) How is the process of creating datasets and evaluating systems conducted? (iii) How to encode task instructions? (iv) When and why do some instructions work better? (v) What concerns remain in LLM-driven instruction following? We will discuss several lines of frontier research that tackle those challenges and will conclude the tutorial by outlining directions for further investigation.",
}
| The progress of natural language processing (NLP) is primarily driven by machine learning that optimizes a system on a large-scale set of task-specific labeled examples. This learning paradigm limits the ability of machines to have the same capabilities as humans in handling new tasks since humans can often solve unseen tasks with a couple of examples accompanied by task instructions. In addition, we may not have a chance to prepare large volumes of task-specific examples for new tasks because we cannot foresee what task needs to be addressed next and how complex it will be to annotate. Therefore, task instructions act as a novel and promising resource for supervision. This tutorial targets researchers and practitioners who are interested in AI and ML technologies for NLP generalization in a low-shot scenario. In particular, we will present a diverse thread of instruction-driven NLP studies that try to answer the following questions: (i) What is task instruction? (ii) How is the process of creating datasets and evaluating systems conducted? (iii) How to encode task instructions? (iv) When and why do some instructions work better? (v) What concerns remain in LLM-driven instruction following? We will discuss several lines of frontier research that tackle those challenges and will conclude the tutorial by outlining directions for further investigation. | [
"Yin, Wenpeng",
"Ye, Qinyuan",
"Liu, Pengfei",
"Ren, Xiang",
"Sch{\\\"u}tze, Hinrich"
] | LLM-driven Instruction Following: Progresses and Concerns | emnlp-tutorial.4 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-tutorial.5.bib | https://aclanthology.org/2023.emnlp-tutorial.5/ | @inproceedings{kumar-etal-2023-mitigating,
title = "Mitigating Societal Harms in Large Language Models",
author = "Kumar, Sachin and
Balachandran, Vidhisha and
Njoo, Lucille and
Anastasopoulos, Antonios and
Tsvetkov, Yulia",
editor = "Zhang, Qi and
Sajjad, Hassan",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-tutorial.5",
doi = "10.18653/v1/2023.emnlp-tutorial.5",
pages = "26--33",
abstract = "Numerous recent studies have highlighted societal harms that can be caused by language technologies deployed in the wild. While several surveys, tutorials, and workshops have discussed the risks of harms in specific contexts {--} e.g., detecting and mitigating gender bias in NLP models {--} no prior work has developed a unified typology of technical approaches for mitigating harms of language generation models. Our tutorial is based on a survey we recently wrote that proposes such a typology. We will provide an overview of potential social issues in language generation, including toxicity, social biases, misinformation, factual inconsistency, and privacy violations. Our primary focus will be on how to systematically identify risks, and how eliminate them at various stages of model development, from data collection, to model development, to inference/language generation. Through this tutorial, we aim to equip NLP researchers and engineers with a suite of practical tools for mitigating safety risks from pretrained language generation models.",
}
| Numerous recent studies have highlighted societal harms that can be caused by language technologies deployed in the wild. While several surveys, tutorials, and workshops have discussed the risks of harms in specific contexts {--} e.g., detecting and mitigating gender bias in NLP models {--} no prior work has developed a unified typology of technical approaches for mitigating harms of language generation models. Our tutorial is based on a survey we recently wrote that proposes such a typology. We will provide an overview of potential social issues in language generation, including toxicity, social biases, misinformation, factual inconsistency, and privacy violations. Our primary focus will be on how to systematically identify risks, and how to eliminate them at various stages of model development, from data collection, to model development, to inference/language generation. Through this tutorial, we aim to equip NLP researchers and engineers with a suite of practical tools for mitigating safety risks from pretrained language generation models. | [
"Kumar, Sachin",
"Balach",
"ran, Vidhisha",
"Njoo, Lucille",
"Anastasopoulos, Antonios",
"Tsvetkov, Yulia"
] | Mitigating Societal Harms in Large Language Models | emnlp-tutorial.5 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-tutorial.6.bib | https://aclanthology.org/2023.emnlp-tutorial.6/ | @inproceedings{chakrabarty-etal-2023-creative,
title = "Creative Natural Language Generation",
author = "Chakrabarty, Tuhin and
Padmakumar, Vishakh and
He, He and
Peng, Nanyun",
editor = "Zhang, Qi and
Sajjad, Hassan",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Tutorial Abstracts",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-tutorial.6",
doi = "10.18653/v1/2023.emnlp-tutorial.6",
pages = "34--40",
abstract = "Large language models such as GPT-3, GPT4, Claude etc., have advanced the state of the art in several natural language generation tasks such as text summarization and machine translation. However when it comes to open-ended tasks with a focus on creativity such as generating stories, poetry, or various forms of figurative language, these state-of-the-art language models are often found to be inadequate. This tutorial aims to bring awareness of the important and emerging research area of open-domain creative generation, with a focus on language generation while also touching on multi-modal generation (e.g., image captioning, visual metaphors). It targets natural language processing (NLP) and artificial intelligence (AI) researchers as well as creative writing practitioners who are interested in building systems that are capable of emulating as well as augmenting human creativity. In particular, we will review recent studies on creative language generation both at the sentence level as well as longer forms of text. We will provide the audiences with a holistic view of 1) the importance and challenges of building creative language generation systems; 2) how we incorporate content planning, domain knowledge and creativity specific heuristics for different forms of creative language generation such as story, poetry, humor, metaphors etc 3) how can we build better evaluation methods for creative text generation? In particular, how could the recent advancement of AI shape the future workforce for creativity? We will conclude the tutorial by outlining future research directions in this area.",
}
| Large language models such as GPT-3, GPT4, Claude etc., have advanced the state of the art in several natural language generation tasks such as text summarization and machine translation. However when it comes to open-ended tasks with a focus on creativity such as generating stories, poetry, or various forms of figurative language, these state-of-the-art language models are often found to be inadequate. This tutorial aims to bring awareness of the important and emerging research area of open-domain creative generation, with a focus on language generation while also touching on multi-modal generation (e.g., image captioning, visual metaphors). It targets natural language processing (NLP) and artificial intelligence (AI) researchers as well as creative writing practitioners who are interested in building systems that are capable of emulating as well as augmenting human creativity. In particular, we will review recent studies on creative language generation both at the sentence level as well as longer forms of text. We will provide the audiences with a holistic view of 1) the importance and challenges of building creative language generation systems; 2) how we incorporate content planning, domain knowledge and creativity specific heuristics for different forms of creative language generation such as story, poetry, humor, metaphors etc 3) how can we build better evaluation methods for creative text generation? In particular, how could the recent advancement of AI shape the future workforce for creativity? We will conclude the tutorial by outlining future research directions in this area. | [
"Chakrabarty, Tuhin",
"Padmakumar, Vishakh",
"He, He",
"Peng, Nanyun"
] | Creative Natural Language Generation | emnlp-tutorial.6 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.1.bib | https://aclanthology.org/2023.emnlp-demo.1/ | @inproceedings{golde-etal-2023-fabricator,
title = "Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher {LLM}s",
author = "Golde, Jonas and
Haller, Patrick and
Hamborg, Felix and
Risch, Julian and
Akbik, Alan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.1",
doi = "10.18653/v1/2023.emnlp-demo.1",
pages = "1--11",
abstract = "Most NLP tasks are modeled as supervised learning and thus require labeled training data to train effective models. However, manually producing such data at sufficient quality and quantity is known to be costly and time-intensive. Current research addresses this bottleneck by exploring a novel paradigm called zero-shot learning via dataset generation. Here, a powerful LLM is prompted with a task description to generate labeled data that can be used to train a downstream NLP model. For instance, an LLM might be prompted to {``}generate 500 movie reviews with positive overall sentiment, and another 500 with negative sentiment.{''} The generated data could then be used to train a binary sentiment classifier, effectively leveraging an LLM as a teacher to a smaller student model. With this demo, we introduce Fabricator, an open-source Python toolkit for dataset generation. Fabricator implements common dataset generation workflows, supports a wide range of downstream NLP tasks (such as text classification, question answering, and entity recognition), and is integrated with well-known libraries to facilitate quick experimentation. With Fabricator, we aim to support researchers in conducting reproducible dataset generation experiments using LLMs and help practitioners apply this approach to train models for downstream tasks.",
}
| Most NLP tasks are modeled as supervised learning and thus require labeled training data to train effective models. However, manually producing such data at sufficient quality and quantity is known to be costly and time-intensive. Current research addresses this bottleneck by exploring a novel paradigm called zero-shot learning via dataset generation. Here, a powerful LLM is prompted with a task description to generate labeled data that can be used to train a downstream NLP model. For instance, an LLM might be prompted to {``}generate 500 movie reviews with positive overall sentiment, and another 500 with negative sentiment.{''} The generated data could then be used to train a binary sentiment classifier, effectively leveraging an LLM as a teacher to a smaller student model. With this demo, we introduce Fabricator, an open-source Python toolkit for dataset generation. Fabricator implements common dataset generation workflows, supports a wide range of downstream NLP tasks (such as text classification, question answering, and entity recognition), and is integrated with well-known libraries to facilitate quick experimentation. With Fabricator, we aim to support researchers in conducting reproducible dataset generation experiments using LLMs and help practitioners apply this approach to train models for downstream tasks. | [
"Golde, Jonas",
"Haller, Patrick",
"Hamborg, Felix",
"Risch, Julian",
"Akbik, Alan"
] | Fabricator: An Open Source Toolkit for Generating Labeled Training Data with Teacher LLMs | emnlp-demo.1 | 2309.09582 | [
"https://github.com/flairnlp/fabricator"
] | https://huggingface.co/papers/2309.09582 | 3 | 4 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.2.bib | https://aclanthology.org/2023.emnlp-demo.2/ | @inproceedings{huber-etal-2023-end,
title = "End-to-End Evaluation for Low-Latency Simultaneous Speech Translation",
author = "Huber, Christian and
Dinh, Tu Anh and
Mullov, Carlos and
Pham, Ngoc-Quan and
Nguyen, Thai Binh and
Retkowski, Fabian and
Constantin, Stefan and
Ugan, Enes and
Liu, Danni and
Li, Zhaolin and
Koneru, Sai and
Niehues, Jan and
Waibel, Alexander",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.2",
doi = "10.18653/v1/2023.emnlp-demo.2",
pages = "12--20",
abstract = "The challenge of low-latency speech translation has recently draw significant interest in the research community as shown by several publications and shared tasks. Therefore, it is essential to evaluate these different approaches in realistic scenarios. However, currently only specific aspects of the systems are evaluated and often it is not possible to compare different approaches. In this work, we propose the first framework to perform and evaluate the various aspects of low-latency speech translation under realistic conditions. The evaluation is carried out in an end-to-end fashion. This includes the segmentation of the audio as well as the run-time of the different components. Secondly, we compare different approaches to low-latency speech translation using this framework. We evaluate models with the option to revise the output as well as methods with fixed output. Furthermore, we directly compare state-of-the-art cascaded as well as end-to-end systems. Finally, the framework allows to automatically evaluate the translation quality as well as latency and also provides a web interface to show the low-latency model outputs to the user.",
}
| The challenge of low-latency speech translation has recently drawn significant interest in the research community, as shown by several publications and shared tasks. Therefore, it is essential to evaluate these different approaches in realistic scenarios. However, currently only specific aspects of the systems are evaluated, and it is often not possible to compare different approaches. In this work, we propose the first framework to perform and evaluate the various aspects of low-latency speech translation under realistic conditions. The evaluation is carried out in an end-to-end fashion. This includes the segmentation of the audio as well as the run-time of the different components. Secondly, we compare different approaches to low-latency speech translation using this framework. We evaluate models with the option to revise the output as well as methods with fixed output. Furthermore, we directly compare state-of-the-art cascaded as well as end-to-end systems. Finally, the framework allows the automatic evaluation of translation quality and latency, and also provides a web interface to show the low-latency model outputs to the user. | [
"Huber, Christian",
"Dinh, Tu Anh",
"Mullov, Carlos",
"Pham, Ngoc-Quan",
"Nguyen, Thai Binh",
"Retkowski, Fabian",
"Constantin, Stefan",
"Ugan, Enes",
"Liu, Danni",
"Li, Zhaolin",
"Koneru, Sai",
"Niehues, Jan",
"Waibel, Alex",
"er"
] | End-to-End Evaluation for Low-Latency Simultaneous Speech Translation | emnlp-demo.2 | 2308.03415 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.3.bib | https://aclanthology.org/2023.emnlp-demo.3/ | @inproceedings{ni-etal-2023-chatreport,
title = "{CHATREPORT}: Democratizing Sustainability Disclosure Analysis through {LLM}-based Tools",
author = "Ni, Jingwei and
Bingler, Julia and
Colesanti-Senni, Chiara and
Kraus, Mathias and
Gostlow, Glen and
Schimanski, Tobias and
Stammbach, Dominik and
Ashraf Vaghefi, Saeid and
Wang, Qian and
Webersinke, Nicolas and
Wekhof, Tobias and
Yu, Tingyu and
Leippold, Markus",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.3",
doi = "10.18653/v1/2023.emnlp-demo.3",
pages = "21--51",
abstract = "In the face of climate change, are companies really taking substantial steps toward more sustainable operations? A comprehensive answer lies in the dense, information-rich landscape of corporate sustainability reports. However, the sheer volume and complexity of these reports make human analysis very costly. Therefore, only a few entities worldwide have the resources to analyze these reports at scale, which leads to a lack of transparency in sustainability reporting. Empowering stakeholders with LLM-based automatic analysis tools can be a promising way to democratize sustainability report analysis. However, developing such tools is challenging due to (1) the hallucination of LLMs and (2) the inefficiency of bringing domain experts into the AI development loop. In this paper, we introduce ChatReport, a novel LLM-based system to automate the analysis of corporate sustainability reports, addressing existing challenges by (1) making the answers traceable to reduce the harm of hallucination and (2) actively involving domain experts in the development loop. We make our methodology, annotated datasets, and generated analyses of 1015 reports publicly available. Video Introduction: \url{https://www.youtube.com/watch?v=Q5AzaKzPE4M} Github: \url{https://github.com/EdisonNi-hku/chatreport} Live web app: reports.chatclimate.ai",
}
| In the face of climate change, are companies really taking substantial steps toward more sustainable operations? A comprehensive answer lies in the dense, information-rich landscape of corporate sustainability reports. However, the sheer volume and complexity of these reports make human analysis very costly. Therefore, only a few entities worldwide have the resources to analyze these reports at scale, which leads to a lack of transparency in sustainability reporting. Empowering stakeholders with LLM-based automatic analysis tools can be a promising way to democratize sustainability report analysis. However, developing such tools is challenging due to (1) the hallucination of LLMs and (2) the inefficiency of bringing domain experts into the AI development loop. In this paper, we introduce ChatReport, a novel LLM-based system to automate the analysis of corporate sustainability reports, addressing existing challenges by (1) making the answers traceable to reduce the harm of hallucination and (2) actively involving domain experts in the development loop. We make our methodology, annotated datasets, and generated analyses of 1015 reports publicly available. Video Introduction: \url{https://www.youtube.com/watch?v=Q5AzaKzPE4M} Github: \url{https://github.com/EdisonNi-hku/chatreport} Live web app: reports.chatclimate.ai | [
"Ni, Jingwei",
"Bingler, Julia",
"Colesanti-Senni, Chiara",
"Kraus, Mathias",
"Gostlow, Glen",
"Schimanski, Tobias",
"Stammbach, Dominik",
"Ashraf Vaghefi, Saeid",
"Wang, Qian",
"Webersinke, Nicolas",
"Wekhof, Tobias",
"Yu, Tingyu",
"Leippold, Markus"
] | CHATREPORT: Democratizing Sustainability Disclosure Analysis through LLM-based Tools | emnlp-demo.3 | 2307.15770 | [
"https://github.com/edisonni-hku/chatreport"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.4.bib | https://aclanthology.org/2023.emnlp-demo.4/ | @inproceedings{hoshi-etal-2023-ralle,
title = "{R}a{LL}e: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models",
author = "Hoshi, Yasuto and
Miyashita, Daisuke and
Ng, Youyang and
Tatsuno, Kento and
Morioka, Yasuhiro and
Torii, Osamu and
Deguchi, Jun",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.4",
doi = "10.18653/v1/2023.emnlp-demo.4",
pages = "52--69",
abstract = "Retrieval-augmented large language models (R-LLMs) combine pre-trained large language models (LLMs) with information retrieval systems to improve the accuracy of factual question-answering. However, current libraries for building R-LLMs provide high-level abstractions without sufficient transparency for evaluating and optimizing prompts within specific inference processes such as retrieval and generation. To address this gap, we present RaLLe, an open-source framework designed to facilitate the development, evaluation, and optimization of R-LLMs for knowledge-intensive tasks. With RaLLe, developers can easily develop and evaluate R-LLMs, improving hand-crafted prompts, assessing individual inference processes, and objectively measuring overall system performance quantitatively. By leveraging these features, developers can enhance the performance and accuracy of their R-LLMs in knowledge-intensive generation tasks.",
}
| Retrieval-augmented large language models (R-LLMs) combine pre-trained large language models (LLMs) with information retrieval systems to improve the accuracy of factual question-answering. However, current libraries for building R-LLMs provide high-level abstractions without sufficient transparency for evaluating and optimizing prompts within specific inference processes such as retrieval and generation. To address this gap, we present RaLLe, an open-source framework designed to facilitate the development, evaluation, and optimization of R-LLMs for knowledge-intensive tasks. With RaLLe, developers can easily develop and evaluate R-LLMs, improving hand-crafted prompts, assessing individual inference processes, and objectively measuring overall system performance quantitatively. By leveraging these features, developers can enhance the performance and accuracy of their R-LLMs in knowledge-intensive generation tasks. | [
"Hoshi, Yasuto",
"Miyashita, Daisuke",
"Ng, Youyang",
"Tatsuno, Kento",
"Morioka, Yasuhiro",
"Torii, Osamu",
"Deguchi, Jun"
] | RaLLe: A Framework for Developing and Evaluating Retrieval-Augmented Large Language Models | emnlp-demo.4 | 2308.10633 | [
"https://github.com/yhoshi3/ralle"
] | https://huggingface.co/papers/2308.10633 | 0 | 1 | 0 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.5.bib | https://aclanthology.org/2023.emnlp-demo.5/ | @inproceedings{voigt-etal-2023-vist5,
title = "{VIST}5: An Adaptive, Retrieval-Augmented Language Model for Visualization-oriented Dialog",
author = "Voigt, Henrik and
Carvalhais, Nuno and
Meuschke, Monique and
Reichstein, Markus and
Zarrie{\ss}, Sina and
Lawonn, Kai",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.5",
doi = "10.18653/v1/2023.emnlp-demo.5",
pages = "70--81",
abstract = "The advent of large language models has brought about new ways of interacting with data intuitively via natural language. In recent years, a variety of visualization systems have explored the use of natural language to create and modify visualizations through visualization-oriented dialog. However, the majority of these systems rely on tailored dialog agents to analyze domain-specific data and operate domain-specific visualization tools and libraries. This is a major challenge when trying to transfer functionalities between dialog interfaces of different visualization applications. To address this issue, we propose VIST5, a visualization-oriented dialog system that focuses on easy adaptability to an application domain as well as easy transferability of language-controllable visualization library functions between applications. Its architecture is based on a retrieval-augmented T5 language model that leverages few-shot learning capabilities to enable a rapid adaptation of the system.",
}
| The advent of large language models has brought about new ways of interacting with data intuitively via natural language. In recent years, a variety of visualization systems have explored the use of natural language to create and modify visualizations through visualization-oriented dialog. However, the majority of these systems rely on tailored dialog agents to analyze domain-specific data and operate domain-specific visualization tools and libraries. This is a major challenge when trying to transfer functionalities between dialog interfaces of different visualization applications. To address this issue, we propose VIST5, a visualization-oriented dialog system that focuses on easy adaptability to an application domain as well as easy transferability of language-controllable visualization library functions between applications. Its architecture is based on a retrieval-augmented T5 language model that leverages few-shot learning capabilities to enable a rapid adaptation of the system. | [
"Voigt, Henrik",
"Carvalhais, Nuno",
"Meuschke, Monique",
"Reichstein, Markus",
"Zarrie, Sina",
"Lawonn, Kai"
] | VIST5: An Adaptive, Retrieval-Augmented Language Model for Visualization-oriented Dialog | emnlp-demo.5 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.6.bib | https://aclanthology.org/2023.emnlp-demo.6/ | @inproceedings{candel-etal-2023-h2o,
title = "{H}2{O} Open Ecosystem for State-of-the-art Large Language Models",
author = "Candel, Arno and
McKinney, Jon and
Singer, Philipp and
Pfeiffer, Pascal and
Jeblick, Maximilian and
Lee, Chun Ming and
Conde, Marcos",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.6",
doi = "10.18653/v1/2023.emnlp-demo.6",
pages = "82--89",
abstract = "Large Language Models (LLMs) represent a revolution in AI. However, they also pose many significant risks, such as the presence of biased, private, copyrighted or harmful text. For this reason we need open, transparent and safe solutions. We introduce a complete open-source ecosystem for developing and testing LLMs. The goal of this project is to boost open alternatives to closed-source approaches. We release h2oGPT, a family of fine-tuned LLMs from 7 to 70 Billion parameters. We also introduce H2O LLM Studio, a framework and no-code GUI designed for efficient fine-tuning, evaluation, and deployment of LLMs using the most recent state-of-the-art techniques. Our code and models are licensed under fully permissive Apache 2.0 licenses. We believe open-source language models help to boost AI development and make it more accessible and trustworthy. Our demo is available at: https://gpt.h2o.ai/",
}
| Large Language Models (LLMs) represent a revolution in AI. However, they also pose many significant risks, such as the presence of biased, private, copyrighted or harmful text. For this reason, we need open, transparent and safe solutions. We introduce a complete open-source ecosystem for developing and testing LLMs. The goal of this project is to boost open alternatives to closed-source approaches. We release h2oGPT, a family of fine-tuned LLMs from 7 to 70 billion parameters. We also introduce H2O LLM Studio, a framework and no-code GUI designed for efficient fine-tuning, evaluation, and deployment of LLMs using the most recent state-of-the-art techniques. Our code and models are licensed under fully permissive Apache 2.0 licenses. We believe open-source language models help to boost AI development and make it more accessible and trustworthy. Our demo is available at: https://gpt.h2o.ai/ | [
"C",
"el, Arno",
"McKinney, Jon",
"Singer, Philipp",
"Pfeiffer, Pascal",
"Jeblick, Maximilian",
"Lee, Chun Ming",
"Conde, Marcos"
] | H2O Open Ecosystem for State-of-the-art Large Language Models | emnlp-demo.6 | 2310.13012 | [
"https://github.com/h2oai/h2ogpt"
] | https://huggingface.co/papers/2310.13012 | 6 | 7 | 2 | 7 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.7.bib | https://aclanthology.org/2023.emnlp-demo.7/ | @inproceedings{vu-etal-2023-koala,
title = "Koala: An Index for Quantifying Overlaps with Pre-training Corpora",
author = "Vu, Thuy-Trang and
He, Xuanli and
Haffari, Gholamreza and
Shareghi, Ehsan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.7",
doi = "10.18653/v1/2023.emnlp-demo.7",
pages = "90--98",
abstract = "In very recent years more attention has been placed on probing the role of pre-training data in Large Language Models (LLMs) downstream behaviour. Despite the importance, there is no public tool that supports such analysis of pre-training corpora at large scale. To help research in this space, we launch Koala, a searchable index over large pre-training corpora using lossless compressed suffix arrays with highly efficient compression rate and search support. In its first release we index the public proportion of OPT 175B, GPT-3, GPT-Neo, GPT-Neo, LLaMA, BERT, ELECTRA, RoBERTA, XLNet pre-training corpora. Koala provides a framework to do forensic analysis on the current and future benchmarks as well as to assess the degree of memorization in the output from the LLMs. Koala is available for public use at https://koala-index.erc.monash.edu/.",
}
| In very recent years, more attention has been placed on probing the role of pre-training data in the downstream behaviour of Large Language Models (LLMs). Despite its importance, there is no public tool that supports such analysis of pre-training corpora at large scale. To help research in this space, we launch Koala, a searchable index over large pre-training corpora using lossless compressed suffix arrays with a highly efficient compression rate and search support. In its first release, we index the public proportion of the OPT 175B, GPT-3, GPT-Neo, LLaMA, BERT, ELECTRA, RoBERTa, XLNet pre-training corpora. Koala provides a framework for forensic analysis of current and future benchmarks, as well as for assessing the degree of memorization in the output of LLMs. Koala is available for public use at https://koala-index.erc.monash.edu/. | [
"Vu, Thuy-Trang",
"He, Xuanli",
"Haffari, Gholamreza",
"Shareghi, Ehsan"
] | Koala: An Index for Quantifying Overlaps with Pre-training Corpora | emnlp-demo.7 | 2303.14770 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.8.bib | https://aclanthology.org/2023.emnlp-demo.8/ | @inproceedings{chang-etal-2023-sudowoodo,
title = "Sudowoodo: A {C}hinese Lyric Imitation System with Source Lyrics",
author = "Chang, Yongzhu and
Zhang, Rongsheng and
Jiang, Lin and
Chen, Qihang and
Zhang, Le and
Pu, Jiashu",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.8",
doi = "10.18653/v1/2023.emnlp-demo.8",
pages = "99--105",
abstract = "Lyrics generation is a well-known application in natural language generation research, with several previous studies focusing on generating accurate lyrics using precise control such as keywords, rhymes, etc. However, lyrics imitation, which involves writing new lyrics by imitating the style and content of the source lyrics, remains a challenging task due to the lack of a parallel corpus. In this paper, we introduce Sudowoodo, a Chinese lyrics imitation system that can generate new lyrics based on the text of source lyrics. To address the issue of lacking a parallel training corpus for lyrics imitation, we propose a novel framework to construct a parallel corpus based on a keyword-based lyrics model from source lyrics. Then the pairs \textit{(new lyrics, source lyrics)} are used to train the lyrics imitation model. During the inference process, we utilize a post-processing module to filter and rank the generated lyrics, selecting the highest-quality ones. We incorporated audio information and aligned the lyrics with the audio to form the songs as a bonus. The human evaluation results show that our framework can perform better lyric imitation. Meanwhile, the \textit{Sudowoodo} system and demo video of the system is available at Sudowoodo and \url{https://youtu.be/u5BBT\_j1L5M}",
}
| Lyrics generation is a well-known application in natural language generation research, with several previous studies focusing on generating accurate lyrics using precise control such as keywords, rhymes, etc. However, lyrics imitation, which involves writing new lyrics by imitating the style and content of the source lyrics, remains a challenging task due to the lack of a parallel corpus. In this paper, we introduce Sudowoodo, a Chinese lyrics imitation system that can generate new lyrics based on the text of source lyrics. To address the issue of lacking a parallel training corpus for lyrics imitation, we propose a novel framework to construct a parallel corpus based on a keyword-based lyrics model from source lyrics. Then the pairs \textit{(new lyrics, source lyrics)} are used to train the lyrics imitation model. During the inference process, we utilize a post-processing module to filter and rank the generated lyrics, selecting the highest-quality ones. We incorporated audio information and aligned the lyrics with the audio to form the songs as a bonus. The human evaluation results show that our framework can perform better lyric imitation. Meanwhile, the \textit{Sudowoodo} system and a demo video of the system are available at Sudowoodo and \url{https://youtu.be/u5BBT\_j1L5M} | [
"Chang, Yongzhu",
"Zhang, Rongsheng",
"Jiang, Lin",
"Chen, Qihang",
"Zhang, Le",
"Pu, Jiashu"
] | Sudowoodo: A Chinese Lyric Imitation System with Source Lyrics | emnlp-demo.8 | 2308.04665 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.9.bib | https://aclanthology.org/2023.emnlp-demo.9/ | @inproceedings{zhu-etal-2023-convlab,
title = "{C}onv{L}ab-3: A Flexible Dialogue System Toolkit Based on a Unified Data Format",
author = "Zhu, Qi and
Geishauser, Christian and
Lin, Hsien-chin and
van Niekerk, Carel and
Peng, Baolin and
Zhang, Zheng and
Feng, Shutong and
Heck, Michael and
Lubis, Nurul and
Wan, Dazhen and
Zhu, Xiaochen and
Gao, Jianfeng and
Gasic, Milica and
Huang, Minlie",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.9",
doi = "10.18653/v1/2023.emnlp-demo.9",
pages = "106--123",
abstract = "Task-oriented dialogue (TOD) systems function as digital assistants, guiding users through various tasks such as booking flights or finding restaurants. Existing toolkits for building TOD systems often fall short in delivering comprehensive arrays of data, model, and experimental environments with a user-friendly experience. We introduce ConvLab-3: a multifaceted dialogue system toolkit crafted to bridge this gap. Our unified data format simplifies the integration of diverse datasets and models, significantly reducing complexity and cost for studying generalization and transfer. Enhanced with robust reinforcement learning (RL) tools, featuring a streamlined training process, in-depth evaluation tools, and a selection of user simulators, ConvLab-3 supports the rapid development and evaluation of robust dialogue policies. Through an extensive study, we demonstrate the efficacy of transfer learning and RL and showcase that ConvLab-3 is not only a powerful tool for seasoned researchers but also an accessible platform for newcomers.",
}
| Task-oriented dialogue (TOD) systems function as digital assistants, guiding users through various tasks such as booking flights or finding restaurants. Existing toolkits for building TOD systems often fall short in delivering comprehensive arrays of data, model, and experimental environments with a user-friendly experience. We introduce ConvLab-3: a multifaceted dialogue system toolkit crafted to bridge this gap. Our unified data format simplifies the integration of diverse datasets and models, significantly reducing complexity and cost for studying generalization and transfer. Enhanced with robust reinforcement learning (RL) tools, featuring a streamlined training process, in-depth evaluation tools, and a selection of user simulators, ConvLab-3 supports the rapid development and evaluation of robust dialogue policies. Through an extensive study, we demonstrate the efficacy of transfer learning and RL and showcase that ConvLab-3 is not only a powerful tool for seasoned researchers but also an accessible platform for newcomers. | [
"Zhu, Qi",
"Geishauser, Christian",
"Lin, Hsien-chin",
"van Niekerk, Carel",
"Peng, Baolin",
"Zhang, Zheng",
"Feng, Shutong",
"Heck, Michael",
"Lubis, Nurul",
"Wan, Dazhen",
"Zhu, Xiaochen",
"Gao, Jianfeng",
"Gasic, Milica",
"Huang, Minlie"
] | ConvLab-3: A Flexible Dialogue System Toolkit Based on a Unified Data Format | emnlp-demo.9 | 2211.17148 | [
"https://github.com/convlab/convlab-3"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.10.bib | https://aclanthology.org/2023.emnlp-demo.10/ | @inproceedings{fatahi-bayat-etal-2023-fleek,
title = "{FLEEK}: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge",
author = "Fatahi Bayat, Farima and
Qian, Kun and
Han, Benjamin and
Sang, Yisi and
Belyy, Anton and
Khorshidi, Samira and
Wu, Fei and
Ilyas, Ihab and
Li, Yunyao",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.10",
doi = "10.18653/v1/2023.emnlp-demo.10",
pages = "124--130",
abstract = "Detecting factual errors of textual information, whether generated by large language models (LLM) or curated by humans, is crucial for making informed decisions. LLMs{'} inability to attribute their claims to external knowledge and their tendency to hallucinate makes it difficult to rely on their responses. Humans, too, are prone to factual errors in their writing. Since manual detection and correction of factual er- rors is labor-intensive, developing an automatic approach can greatly reduce human effort. We present a prototype tool that automatically extracts factual claims from text, gathers evidence from external knowledge sources, evaluates the factuality of each claim, and suggests revisions for identified errors using the collected evidence. Initial empirical evaluation on fact error detection (77-85{\%} F1) shows the potential of our tool.",
}
| Detecting factual errors of textual information, whether generated by large language models (LLMs) or curated by humans, is crucial for making informed decisions. LLMs{'} inability to attribute their claims to external knowledge and their tendency to hallucinate makes it difficult to rely on their responses. Humans, too, are prone to factual errors in their writing. Since manual detection and correction of factual errors is labor-intensive, developing an automatic approach can greatly reduce human effort. We present a prototype tool that automatically extracts factual claims from text, gathers evidence from external knowledge sources, evaluates the factuality of each claim, and suggests revisions for identified errors using the collected evidence. Initial empirical evaluation on fact error detection (77-85{\%} F1) shows the potential of our tool. | [
"Fatahi Bayat, Farima",
"Qian, Kun",
"Han, Benjamin",
"Sang, Yisi",
"Belyy, Anton",
"Khorshidi, Samira",
"Wu, Fei",
"Ilyas, Ihab",
"Li, Yunyao"
] | FLEEK: Factual Error Detection and Correction with Evidence Retrieved from External Knowledge | emnlp-demo.10 | 2310.17119 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.11.bib | https://aclanthology.org/2023.emnlp-demo.11/ | @inproceedings{wang-etal-2023-yato,
title = "{YATO}: Yet Another deep learning based Text analysis Open toolkit",
author = "Wang, Zeqiang and
Wang, Yile and
Wu, Jiageng and
Teng, Zhiyang and
Yang, Jie",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.11",
doi = "10.18653/v1/2023.emnlp-demo.11",
pages = "131--139",
abstract = "We introduce YATO, an open-source, easy-to-use toolkit for text analysis with deep learning. Different from existing heavily engineered toolkits and platforms, YATO is lightweight and user-friendly for researchers from cross-disciplinary areas. Designed in a hierarchical structure, YATO supports free combinations of three types of widely used features including 1) traditional neural networks (CNN, RNN, etc.); 2) pre-trained language models (BERT, RoBERTa, ELECTRA, etc.); and 3) user-customized neural features via a simple configurable file. Benefiting from the advantages of flexibility and ease of use, YATO can facilitate fast reproduction and refinement of state-of-the-art NLP models, and promote the cross-disciplinary applications of NLP techniques. The code, examples, and documentation are publicly available at https://github.com/jiesutd/YATO. A demo video is also available at https://www.youtube.com/playlist?list=PLJ0mhzMcRuDUlTkzBfAftOqiJRxYTTjXH.",
}
| We introduce YATO, an open-source, easy-to-use toolkit for text analysis with deep learning. Different from existing heavily engineered toolkits and platforms, YATO is lightweight and user-friendly for researchers from cross-disciplinary areas. Designed in a hierarchical structure, YATO supports free combinations of three types of widely used features including 1) traditional neural networks (CNN, RNN, etc.); 2) pre-trained language models (BERT, RoBERTa, ELECTRA, etc.); and 3) user-customized neural features via a simple configurable file. Benefiting from the advantages of flexibility and ease of use, YATO can facilitate fast reproduction and refinement of state-of-the-art NLP models, and promote the cross-disciplinary applications of NLP techniques. The code, examples, and documentation are publicly available at https://github.com/jiesutd/YATO. A demo video is also available at https://www.youtube.com/playlist?list=PLJ0mhzMcRuDUlTkzBfAftOqiJRxYTTjXH. | [
"Wang, Zeqiang",
"Wang, Yile",
"Wu, Jiageng",
"Teng, Zhiyang",
"Yang, Jie"
] | YATO: Yet Another deep learning based Text analysis Open toolkit | emnlp-demo.11 | 2209.13877 | [
"https://github.com/jiesutd/yato"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.12.bib | https://aclanthology.org/2023.emnlp-demo.12/ | @inproceedings{akiki-etal-2023-spacerini,
title = "Spacerini: Plug-and-play Search Engines with Pyserini and Hugging Face",
author = "Akiki, Christopher and
Ogundepo, Odunayo and
Piktus, Aleksandra and
Zhang, Xinyu and
Oladipo, Akintunde and
Lin, Jimmy and
Potthast, Martin",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.12",
doi = "10.18653/v1/2023.emnlp-demo.12",
pages = "140--148",
abstract = "We present Spacerini, a tool that integrates the Pyserini toolkit for reproducible information retrieval research with Hugging Face to enable the seamless construction and deployment of interactive search engines. Spacerini makes state-of-the-art sparse and dense retrieval models more accessible to non-IR practitioners while minimizing deployment effort. This is useful for NLP researchers who want to better understand and validate their research by performing qualitative analyses of training corpora, for IR researchers who want to demonstrate new retrieval models integrated into the growing Pyserini ecosystem, and for third parties reproducing the work of other researchers. Spacerini is open source and includes utilities for loading, preprocessing, indexing, and deploying search engines locally and remotely. We demonstrate a portfolio of 13 search engines created with Spacerini for different use cases.",
}
| We present Spacerini, a tool that integrates the Pyserini toolkit for reproducible information retrieval research with Hugging Face to enable the seamless construction and deployment of interactive search engines. Spacerini makes state-of-the-art sparse and dense retrieval models more accessible to non-IR practitioners while minimizing deployment effort. This is useful for NLP researchers who want to better understand and validate their research by performing qualitative analyses of training corpora, for IR researchers who want to demonstrate new retrieval models integrated into the growing Pyserini ecosystem, and for third parties reproducing the work of other researchers. Spacerini is open source and includes utilities for loading, preprocessing, indexing, and deploying search engines locally and remotely. We demonstrate a portfolio of 13 search engines created with Spacerini for different use cases. | [
"Akiki, Christopher",
"Ogundepo, Odunayo",
"Piktus, Aleks",
"ra",
"Zhang, Xinyu",
"Oladipo, Akintunde",
"Lin, Jimmy",
"Potthast, Martin"
] | Spacerini: Plug-and-play Search Engines with Pyserini and Hugging Face | emnlp-demo.12 | 2302.14534 | [
"https://github.com/castorini/hf-spacerini"
] | https://huggingface.co/papers/2302.14534 | 2 | 0 | 1 | 7 | [] | [
"society-ethics/papers"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.13.bib | https://aclanthology.org/2023.emnlp-demo.13/ | @inproceedings{poth-etal-2023-adapters,
title = "Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning",
author = {Poth, Clifton and
Sterz, Hannah and
Paul, Indraneil and
Purkayastha, Sukannya and
Engl{\"a}nder, Leon and
Imhof, Timo and
Vuli{\'c}, Ivan and
Ruder, Sebastian and
Gurevych, Iryna and
Pfeiffer, Jonas},
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.13",
doi = "10.18653/v1/2023.emnlp-demo.13",
pages = "149--160",
abstract = "We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library{'}s efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters.",
}
| We introduce Adapters, an open-source library that unifies parameter-efficient and modular transfer learning in large language models. By integrating 10 diverse adapter methods into a unified interface, Adapters offers ease of use and flexible configuration. Our library allows researchers and practitioners to leverage adapter modularity through composition blocks, enabling the design of complex adapter setups. We demonstrate the library{'}s efficacy by evaluating its performance against full fine-tuning on various NLP tasks. Adapters provides a powerful tool for addressing the challenges of conventional fine-tuning paradigms and promoting more efficient and modular transfer learning. The library is available via https://adapterhub.ml/adapters. | [
"Poth, Clifton",
"Sterz, Hannah",
"Paul, Indraneil",
"Purkayastha, Sukannya",
"Engl{\\\"a}nder, Leon",
"Imhof, Timo",
"Vuli{\\'c}, Ivan",
"Ruder, Sebastian",
"Gurevych, Iryna",
"Pfeiffer, Jonas"
] | Adapters: A Unified Library for Parameter-Efficient and Modular Transfer Learning | emnlp-demo.13 | 2311.11077 | [
"https://github.com/adapter-hub/adapters"
] | https://huggingface.co/papers/2311.11077 | 7 | 24 | 3 | 10 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.14.bib | https://aclanthology.org/2023.emnlp-demo.14/ | @inproceedings{yang-etal-2023-intelmo,
title = "{INTELMO}: Enhancing Models{'} Adoption of Interactive Interfaces",
author = "Yang, Chunxu and
Wu, Chien-Sheng and
Murakhovs{'}ka, Lidiya and
Laban, Philippe and
Chen, Xiang",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.14",
doi = "10.18653/v1/2023.emnlp-demo.14",
pages = "161--166",
abstract = "This paper presents INTELMO, an easy-to-use library to help model developers adopt user-faced interactive interfaces and articles from real-time RSS sources for their language models. The library categorizes common NLP tasks and provides default style patterns, streamlining the process of creating interfaces with minimal code modifications while ensuring an intuitive user experience. Moreover, INTELMO employs a multi-granular hierarchical abstraction to provide developers with fine-grained and flexible control over user interfaces. INTELMO is under active development, with document available at \url{https://intelmo.github.io}.",
}
| This paper presents INTELMO, an easy-to-use library to help model developers adopt user-facing interactive interfaces and articles from real-time RSS sources for their language models. The library categorizes common NLP tasks and provides default style patterns, streamlining the process of creating interfaces with minimal code modifications while ensuring an intuitive user experience. Moreover, INTELMO employs a multi-granular hierarchical abstraction to provide developers with fine-grained and flexible control over user interfaces. INTELMO is under active development, with documentation available at \url{https://intelmo.github.io}. | [
"Yang, Chunxu",
"Wu, Chien-Sheng",
"Murakhovs{'}ka, Lidiya",
"Laban, Philippe",
"Chen, Xiang"
] | INTELMO: Enhancing Models' Adoption of Interactive Interfaces | emnlp-demo.14 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.15.bib | https://aclanthology.org/2023.emnlp-demo.15/ | @inproceedings{wang-etal-2023-humanoid,
title = "Humanoid Agents: Platform for Simulating Human-like Generative Agents",
author = "Wang, Zhilin and
Chiu, Yu Ying and
Chiu, Yu Cheung",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.15",
doi = "10.18653/v1/2023.emnlp-demo.15",
pages = "167--176",
abstract = "Just as computational simulations of atoms, molecules and cells have shaped the way we study the sciences, true-to-life simulations of human-like agents can be valuable tools for studying human behavior. We propose Humanoid Agents, a system that guides Generative Agents to behave more like humans by introducing three elements of System 1 processing: Basic needs (e.g. hunger, health and energy), Emotion and Closeness in Relationships. Humanoid Agents are able to use these dynamic elements to adapt their daily activities and conversations with other agents, as supported with empirical experiments. Our system is designed to be extensible to various settings, three of which we demonstrate, as well as to other elements influencing human behavior (e.g. empathy, moral values and cultural background). Our platform also includes a Unity WebGL game interface for visualization and an interactive analytics dashboard to show agent statuses over time. Our platform is available on https://www.humanoidagents.com/ and code is on https://github.com/HumanoidAgents/HumanoidAgents",
}
| Just as computational simulations of atoms, molecules and cells have shaped the way we study the sciences, true-to-life simulations of human-like agents can be valuable tools for studying human behavior. We propose Humanoid Agents, a system that guides Generative Agents to behave more like humans by introducing three elements of System 1 processing: Basic needs (e.g. hunger, health and energy), Emotion and Closeness in Relationships. Humanoid Agents are able to use these dynamic elements to adapt their daily activities and conversations with other agents, as supported with empirical experiments. Our system is designed to be extensible to various settings, three of which we demonstrate, as well as to other elements influencing human behavior (e.g. empathy, moral values and cultural background). Our platform also includes a Unity WebGL game interface for visualization and an interactive analytics dashboard to show agent statuses over time. Our platform is available on https://www.humanoidagents.com/ and code is on https://github.com/HumanoidAgents/HumanoidAgents | [
"Wang, Zhilin",
"Chiu, Yu Ying",
"Chiu, Yu Cheung"
] | Humanoid Agents: Platform for Simulating Human-like Generative Agents | emnlp-demo.15 | 2310.05418 | [
"https://github.com/humanoidagents/humanoidagents"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.16.bib | https://aclanthology.org/2023.emnlp-demo.16/ | @inproceedings{wu-etal-2023-tp,
title = "{TP}-Detector: Detecting Turning Points in the Engineering Process of Large-scale Projects",
author = "Wu, Qi and
Chao, WenHan and
Zhou, Xian and
Luo, Zhunchen",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.16",
doi = "10.18653/v1/2023.emnlp-demo.16",
pages = "177--185",
abstract = "This paper introduces a novel task of detecting turning points in the engineering process of large-scale projects, wherein the turning points signify significant transitions occurring between phases. Given the complexities involving diverse critical events and limited comprehension in individual news reports, we approach the problem by treating the sequence of related news streams as a window with multiple instances. To capture the evolution of changes effectively, we adopt a deep Multiple Instance Learning (MIL) framework and employ the multiple instance ranking loss to discern the transition patterns exhibited in the turning point window. Extensive experiments consistently demonstrate the effectiveness of our proposed approach on the constructed dataset compared to baseline methods. We deployed the proposed mode and provided a demonstration video to illustrate its functionality. The code and dataset are available on GitHub.",
}
| This paper introduces a novel task of detecting turning points in the engineering process of large-scale projects, wherein the turning points signify significant transitions occurring between phases. Given the complexities involving diverse critical events and limited comprehension in individual news reports, we approach the problem by treating the sequence of related news streams as a window with multiple instances. To capture the evolution of changes effectively, we adopt a deep Multiple Instance Learning (MIL) framework and employ the multiple instance ranking loss to discern the transition patterns exhibited in the turning point window. Extensive experiments consistently demonstrate the effectiveness of our proposed approach on the constructed dataset compared to baseline methods. We deployed the proposed model and provided a demonstration video to illustrate its functionality. The code and dataset are available on GitHub. | [
"Wu, Qi",
"Chao, WenHan",
"Zhou, Xian",
"Luo, Zhunchen"
] | TP-Detector: Detecting Turning Points in the Engineering Process of Large-scale Projects | emnlp-demo.16 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.17.bib | https://aclanthology.org/2023.emnlp-demo.17/ | @inproceedings{li-etal-2023-cleva,
title = "{CLEVA}: {C}hinese Language Models {EVA}luation Platform",
author = "Li, Yanyang and
Zhao, Jianqiao and
Zheng, Duo and
Hu, Zi-Yuan and
Chen, Zhi and
Su, Xiaohui and
Huang, Yongfeng and
Huang, Shijia and
Lin, Dahua and
Lyu, Michael and
Wang, Liwei",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.17",
doi = "10.18653/v1/2023.emnlp-demo.17",
pages = "186--217",
abstract = "With the continuous emergence of Chinese Large Language Models (LLMs), how to evaluate a model{'}s capabilities has become an increasingly significant issue. The absence of a comprehensive Chinese benchmark that thoroughly assesses a model{'}s performance, the unstandardized and incomparable prompting procedure, and the prevalent risk of contamination pose major challenges in the current evaluation of Chinese LLMs. We present CLEVA, a user-friendly platform crafted to holistically evaluate Chinese LLMs. Our platform employs a standardized workflow to assess LLMs{'} performance across various dimensions, regularly updating a competitive leaderboard. To alleviate contamination, CLEVA curates a significant proportion of new data and develops a sampling strategy that guarantees a unique subset for each leaderboard round. Empowered by an easy-to-use interface that requires just a few mouse clicks and a model API, users can conduct a thorough evaluation with minimal coding. Large-scale experiments featuring 23 Chinese LLMs have validated CLEVA{'}s efficacy.",
}
| With the continuous emergence of Chinese Large Language Models (LLMs), how to evaluate a model{'}s capabilities has become an increasingly significant issue. The absence of a comprehensive Chinese benchmark that thoroughly assesses a model{'}s performance, the unstandardized and incomparable prompting procedure, and the prevalent risk of contamination pose major challenges in the current evaluation of Chinese LLMs. We present CLEVA, a user-friendly platform crafted to holistically evaluate Chinese LLMs. Our platform employs a standardized workflow to assess LLMs{'} performance across various dimensions, regularly updating a competitive leaderboard. To alleviate contamination, CLEVA curates a significant proportion of new data and develops a sampling strategy that guarantees a unique subset for each leaderboard round. Empowered by an easy-to-use interface that requires just a few mouse clicks and a model API, users can conduct a thorough evaluation with minimal coding. Large-scale experiments featuring 23 Chinese LLMs have validated CLEVA{'}s efficacy. | [
"Li, Yanyang",
"Zhao, Jianqiao",
"Zheng, Duo",
"Hu, Zi-Yuan",
"Chen, Zhi",
"Su, Xiaohui",
"Huang, Yongfeng",
"Huang, Shijia",
"Lin, Dahua",
"Lyu, Michael",
"Wang, Liwei"
] | CLEVA: Chinese Language Models EVAluation Platform | emnlp-demo.17 | 2308.04813 | [
"https://github.com/lavi-lab/cleva"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.18.bib | https://aclanthology.org/2023.emnlp-demo.18/ | @inproceedings{lohr-hahn-2023-dopa,
title = "{DOPA} {METER} {--} A Tool Suite for Metrical Document Profiling and Aggregation",
author = "Lohr, Christina and
Hahn, Udo",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.18",
doi = "10.18653/v1/2023.emnlp-demo.18",
pages = "218--228",
abstract = "We present DOPA METER, a tool suite for the metrical investigation of written language, that provides diagnostic means for its division into discourse categories, such as registers, genres, and style. The quantitative basis of our system are 120 metrics covering a wide range of lexical, syntactic, and semantic features relevant for language profiling. The scores can be summarized, compared, and aggregated using visualization tools that can be tailored according to the users{'} needs. We also showcase an application scenario for DOPA METER.",
}
| We present DOPA METER, a tool suite for the metrical investigation of written language that provides diagnostic means for its division into discourse categories, such as registers, genres, and styles. The quantitative basis of our system is a set of 120 metrics covering a wide range of lexical, syntactic, and semantic features relevant for language profiling. The scores can be summarized, compared, and aggregated using visualization tools that can be tailored according to the users{'} needs. We also showcase an application scenario for DOPA METER. | [
"Lohr, Christina",
"Hahn, Udo"
] | DOPA METER – A Tool Suite for Metrical Document Profiling and Aggregation | emnlp-demo.18 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.19.bib | https://aclanthology.org/2023.emnlp-demo.19/ | @inproceedings{tillmann-etal-2023-muted,
title = "Muted: Multilingual Targeted Offensive Speech Identification and Visualization",
author = "Tillmann, Christoph and
Trivedi, Aashka and
Rosenthal, Sara and
Borse, Santosh and
Zhang, Rong and
Sil, Avirup and
Bhattacharjee, Bishwaranjan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.19",
doi = "10.18653/v1/2023.emnlp-demo.19",
pages = "229--236",
abstract = "Offensive language such as hate, abuse, and profanity (HAP) occurs in various content on the web. While previous work has mostly dealt with sentence level annotations, there have been a few recent attempts to identify offensive spans as well. We build upon this work and introduce MUTED, a system to identify multilingual HAP content by displaying offensive arguments and their targets using heat maps to indicate their intensity. MUTED can leverage any transformer-based HAP-classification model and its attention mechanism out-of-the-box to identify toxic spans, without further fine-tuning. In addition, we use the spaCy library to identify the specific targets and arguments for the words predicted by the attention heatmaps. We present the model{'}s performance on identifying offensive spans and their targets in existing datasets and present new annotations on German text. Finally, we demonstrate our proposed visualization tool on multilingual inputs.",
}
| Offensive language such as hate, abuse, and profanity (HAP) occurs in various content on the web. While previous work has mostly dealt with sentence-level annotations, there have been a few recent attempts to identify offensive spans as well. We build upon this work and introduce MUTED, a system to identify multilingual HAP content by displaying offensive arguments and their targets using heat maps to indicate their intensity. MUTED can leverage any transformer-based HAP-classification model and its attention mechanism out-of-the-box to identify toxic spans, without further fine-tuning. In addition, we use the spaCy library to identify the specific targets and arguments for the words predicted by the attention heatmaps. We present the model{'}s performance on identifying offensive spans and their targets in existing datasets and present new annotations on German text. Finally, we demonstrate our proposed visualization tool on multilingual inputs. | [
"Tillmann, Christoph",
"Trivedi, Aashka",
"Rosenthal, Sara",
"Borse, Santosh",
"Zhang, Rong",
"Sil, Avirup",
"Bhattacharjee, Bishwaranjan"
] | Muted: Multilingual Targeted Offensive Speech Identification and Visualization | emnlp-demo.19 | 2312.11344 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.20.bib | https://aclanthology.org/2023.emnlp-demo.20/ | @inproceedings{xu-etal-2023-gentopia,
title = "{G}entopia.{AI}: A Collaborative Platform for Tool-Augmented {LLM}s",
author = "Xu, Binfeng and
Liu, Xukun and
Shen, Hua and
Han, Zeyu and
Li, Yuhan and
Yue, Murong and
Peng, Zhiyuan and
Liu, Yuchen and
Yao, Ziyu and
Xu, Dongkuan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.20",
doi = "10.18653/v1/2023.emnlp-demo.20",
pages = "237--245",
abstract = "Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. This paper proposes Gentopia, a lightweight and extensible framework for ALMs. Gentopia allows the flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish Gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in Gentpool are composable such that they can be assembled together for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, Gentbench, an integral component of Gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on Github and will continuously move forward.",
}
| Augmented Language Models (ALMs) empower large language models with the ability to use tools, transforming them into intelligent agents for real-world interactions. However, most existing frameworks for ALMs, to varying degrees, are deficient in the following critical features: flexible customization, collaborative democratization, and holistic evaluation. This paper proposes Gentopia, a lightweight and extensible framework for ALMs. Gentopia allows the flexible customization of agents through simple configurations, seamlessly integrating various language models, task formats, prompting modules, and plugins into a unified paradigm. Furthermore, we establish Gentpool, a public platform enabling the registration and sharing of user-customized agents. Agents registered in Gentpool are composable such that they can be assembled for agent collaboration, advancing the democratization of artificial intelligence. To ensure high-quality agents, Gentbench, an integral component of Gentpool, is designed to thoroughly evaluate user-customized agents across diverse aspects such as safety, robustness, efficiency, etc. We release Gentopia on GitHub and will continuously move forward. | [
"Xu, Binfeng",
"Liu, Xukun",
"Shen, Hua",
"Han, Zeyu",
"Li, Yuhan",
"Yue, Murong",
"Peng, Zhiyuan",
"Liu, Yuchen",
"Yao, Ziyu",
"Xu, Dongkuan"
] | Gentopia.AI: A Collaborative Platform for Tool-Augmented LLMs | emnlp-demo.20 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.21.bib | https://aclanthology.org/2023.emnlp-demo.21/ | @inproceedings{yu-etal-2023-musicagent,
title = "{M}usic{A}gent: An {AI} Agent for Music Understanding and Generation with Large Language Models",
author = "Yu, Dingyao and
Song, Kaitao and
Lu, Peiling and
He, Tianyu and
Tan, Xu and
Ye, Wei and
Zhang, Shikun and
Bian, Jiang",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.21",
doi = "10.18653/v1/2023.emnlp-demo.21",
pages = "246--255",
abstract = "AI-empowered music processing is a diverse feld that encompasses dozens of tasks, ranging from generation tasks (e.g., timbre synthesis) to comprehension tasks (e.g., music classifcation). For developers and amateurs, it is very diffcult to grasp all of these task to satisfy their requirements in music processing, especially considering the huge differences in the representations of music data and the model applicability across platforms among various tasks. Consequently, it is necessary to build a system to organize and integrate these tasks, and thus help practitioners to automatically analyze their demand and call suitable tools as solutions to fulfill their requirements. Inspired by the recent success of large language models (LLMs) in task automation, we develop a system, named MusicAgent, which integrates numerous music-related tools and an autonomous workflow to address user requirements. More specifically, we build 1) toolset that collects tools from diverse sources, including Hugging Face, GitHub, and Web API, etc. 2) an autonomous workflow empowered by LLMs (e.g., ChatGPT) to organize these tools and automatically decompose user requests into multiple sub-tasks and invoke corresponding music tools. The primary goal of this system is to free users from the intricacies of AI-music tools, enabling them to concentrate on the creative aspect. By granting users the freedom to effortlessly combine tools, the system offers a seamless and enriching music experience. The code is available on GitHub along with a brief instructional video.",
}
| AI-empowered music processing is a diverse field that encompasses dozens of tasks, ranging from generation tasks (e.g., timbre synthesis) to comprehension tasks (e.g., music classification). For developers and amateurs, it is very difficult to grasp all of these tasks to satisfy their requirements in music processing, especially considering the huge differences in the representations of music data and the model applicability across platforms among various tasks. Consequently, it is necessary to build a system to organize and integrate these tasks, and thus help practitioners to automatically analyze their demands and call suitable tools as solutions to fulfill their requirements. Inspired by the recent success of large language models (LLMs) in task automation, we develop a system, named MusicAgent, which integrates numerous music-related tools and an autonomous workflow to address user requirements. More specifically, we build 1) a toolset that collects tools from diverse sources, including Hugging Face, GitHub, and Web API, etc., and 2) an autonomous workflow empowered by LLMs (e.g., ChatGPT) to organize these tools and automatically decompose user requests into multiple sub-tasks and invoke corresponding music tools. The primary goal of this system is to free users from the intricacies of AI-music tools, enabling them to concentrate on the creative aspect. By granting users the freedom to effortlessly combine tools, the system offers a seamless and enriching music experience. The code is available on GitHub along with a brief instructional video. | [
"Yu, Dingyao",
"Song, Kaitao",
"Lu, Peiling",
"He, Tianyu",
"Tan, Xu",
"Ye, Wei",
"Zhang, Shikun",
"Bian, Jiang"
] | MusicAgent: An AI Agent for Music Understanding and Generation with Large Language Models | emnlp-demo.21 | 2310.11954 | [
"https://github.com/microsoft/muzic"
] | https://huggingface.co/papers/2310.11954 | 5 | 24 | 2 | 8 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.22.bib | https://aclanthology.org/2023.emnlp-demo.22/ | @inproceedings{steingrimsson-etal-2023-sentalign,
title = "{S}ent{A}lign: Accurate and Scalable Sentence Alignment",
author = "Steingrimsson, Steinthor and
Loftsson, Hrafn and
Way, Andy",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.22",
doi = "10.18653/v1/2023.emnlp-demo.22",
pages = "256--263",
abstract = "We present SentAlign, an accurate sentence alignment tool designed to handle very large parallel document pairs. Given user-defined parameters, the alignment algorithm evaluates all possible alignment paths in fairly large documents of thousands of sentences and uses a divide-and-conquer approach to align documents containing tens of thousands of sentences. The scoring function is based on LaBSE bilingual sentence representations. SentAlign outperforms five other sentence alignment tools when evaluated on two different evaluation sets, German-French and English-Icelandic, and on a downstream machine translation task.",
}
| We present SentAlign, an accurate sentence alignment tool designed to handle very large parallel document pairs. Given user-defined parameters, the alignment algorithm evaluates all possible alignment paths in fairly large documents of thousands of sentences and uses a divide-and-conquer approach to align documents containing tens of thousands of sentences. The scoring function is based on LaBSE bilingual sentence representations. SentAlign outperforms five other sentence alignment tools when evaluated on two different evaluation sets, German-French and English-Icelandic, and on a downstream machine translation task. | [
"Steingrimsson, Steinthor",
"Loftsson, Hrafn",
"Way, Andy"
] | SentAlign: Accurate and Scalable Sentence Alignment | emnlp-demo.22 | 2311.08982 | [
"https://github.com/steinst/sentalign"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.23.bib | https://aclanthology.org/2023.emnlp-demo.23/ | @inproceedings{pan-etal-2023-qacheck,
title = "{QAC}heck: A Demonstration System for Question-Guided Multi-Hop Fact-Checking",
author = "Pan, Liangming and
Lu, Xinyuan and
Kan, Min-Yen and
Nakov, Preslav",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.23",
doi = "10.18653/v1/2023.emnlp-demo.23",
pages = "264--273",
abstract = "Fact-checking real-world claims often requires intricate, multi-step reasoning due to the absence of direct evidence to support or refute them. However, existing fact-checking systems often lack transparency in their decision-making, making it challenging for users to comprehend their reasoning process. To address this, we propose the Question-guided Multi-hop Fact-Checking (QACheck) system, which guides the model{'}s reasoning process by asking a series of questions critical for verifying a claim. QACheck has five key modules: a claim verifier, a question generator, a question-answering module, a QA validator, and a reasoner. Users can input a claim into QACheck, which then predicts its veracity and provides a comprehensive report detailing its reasoning process, guided by a sequence of (question, answer) pairs. QACheck also provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process.",
}
| Fact-checking real-world claims often requires intricate, multi-step reasoning due to the absence of direct evidence to support or refute them. However, existing fact-checking systems often lack transparency in their decision-making, making it challenging for users to comprehend their reasoning process. To address this, we propose the Question-guided Multi-hop Fact-Checking (QACheck) system, which guides the model{'}s reasoning process by asking a series of questions critical for verifying a claim. QACheck has five key modules: a claim verifier, a question generator, a question-answering module, a QA validator, and a reasoner. Users can input a claim into QACheck, which then predicts its veracity and provides a comprehensive report detailing its reasoning process, guided by a sequence of (question, answer) pairs. QACheck also provides the source of evidence supporting each question, fostering a transparent, explainable, and user-friendly fact-checking process. | [
"Pan, Liangming",
"Lu, Xinyuan",
"Kan, Min-Yen",
"Nakov, Preslav"
] | QACheck: A Demonstration System for Question-Guided Multi-Hop Fact-Checking | emnlp-demo.23 | 2310.07609 | [
"https://github.com/xinyuanlu00/qacheck"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.24.bib | https://aclanthology.org/2023.emnlp-demo.24/ | @inproceedings{boreshban-etal-2023-robustqa,
title = "{R}obust{QA}: A Framework for Adversarial Text Generation Analysis on Question Answering Systems",
author = "Boreshban, Yasaman and
Mirbostani, Seyed Morteza and
Ahmadi, Seyedeh Fatemeh and
Shojaee, Gita and
Kamani, Fatemeh and
Ghassem-Sani, Gholamreza and
Mirroshandel, Seyed Abolghasem",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.24",
doi = "10.18653/v1/2023.emnlp-demo.24",
pages = "274--285",
abstract = "Question answering (QA) systems have reached human-level accuracy; however, these systems are not robust enough and are vulnerable to adversarial examples. Recently, adversarial attacks have been widely investigated in text classification. However, there have been few research efforts on this topic in QA. In this article, we have modified the attack algorithms widely used in text classification to fit those algorithms for QA systems. We have evaluated the impact of various attack methods on QA systems at character, word, and sentence levels. Furthermore, we have developed a new framework, named RobustQA, as the first open-source toolkit for investigating textual adversarial attacks in QA systems. RobustQA consists of seven modules: Tokenizer, Victim Model, Goals, Metrics, Attacker, Attack Selector, and Evaluator. It currently supports six different attack algorithms. Furthermore, the framework simplifies the development of new attack algorithms in QA. The source code and documentation of RobustQA are available at https://github.com/mirbostani/RobustQA.",
}
| Question answering (QA) systems have reached human-level accuracy; however, these systems are not robust enough and are vulnerable to adversarial examples. Recently, adversarial attacks have been widely investigated in text classification. However, there have been few research efforts on this topic in QA. In this article, we have modified the attack algorithms widely used in text classification to adapt them to QA systems. We have evaluated the impact of various attack methods on QA systems at the character, word, and sentence levels. Furthermore, we have developed a new framework, named RobustQA, as the first open-source toolkit for investigating textual adversarial attacks in QA systems. RobustQA consists of seven modules: Tokenizer, Victim Model, Goals, Metrics, Attacker, Attack Selector, and Evaluator. It currently supports six different attack algorithms. Moreover, the framework simplifies the development of new attack algorithms in QA. The source code and documentation of RobustQA are available at https://github.com/mirbostani/RobustQA. | [
"Boreshban, Yasaman",
"Mirbostani, Seyed Morteza",
"Ahmadi, Seyedeh Fatemeh",
"Shojaee, Gita",
"Kamani, Fatemeh",
"Ghassem-Sani, Gholamreza",
"Mirrosh",
"el, Seyed Abolghasem"
] | RobustQA: A Framework for Adversarial Text Generation Analysis on Question Answering Systems | emnlp-demo.24 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.25.bib | https://aclanthology.org/2023.emnlp-demo.25/ | @inproceedings{razzhigaev-etal-2023-kandinsky,
title = "Kandinsky: An Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion",
author = "Razzhigaev, Anton and
Shakhmatov, Arseniy and
Maltseva, Anastasia and
Arkhipkin, Vladimir and
Pavlov, Igor and
Ryabov, Ilya and
Kuts, Angelina and
Panchenko, Alexander and
Kuznetsov, Andrey and
Dimitrov, Denis",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.25",
doi = "10.18653/v1/2023.emnlp-demo.25",
pages = "286--295",
abstract = "Text-to-image generation is a significant domain in modern computer vision and achieved substantial improvements through the evolution of generative architectures. Among these, diffusion-based models demonstrated essential quality enhancements. These models generally split into two categories: pixel-level and latent-level approaches. We present Kandinsky {--} a novel exploration of latent diffusion architecture, combining the principles of image prior models with latent diffusion techniques. The image prior model, is trained separately to map CLIP text and image embeddings. Another distinct feature of the proposed model is the modified MoVQ implementation, which serves as the image autoencoder component. Overall the designed model contains 3.3B parameters. We also deployed a user-friendly demo system that supports diverse generative modes such as text-to-image generation, image fusion, text and image fusion, image variations generation and text-guided inpainting/outpainting. Additionally we released the source code and checkpoints for Kandinsky models. Experimental evaluations demonstrate FID score of 8.03 on the COCO-30K dataset, marking our model as the top open source performer in terms of measurable image generation quality.",
}
| Text-to-image generation is a significant domain in modern computer vision and has achieved substantial improvements through the evolution of generative architectures. Among these, diffusion-based models have demonstrated essential quality enhancements. These models generally fall into two categories: pixel-level and latent-level approaches. We present Kandinsky {--} a novel exploration of latent diffusion architecture, combining the principles of image prior models with latent diffusion techniques. The image prior model is trained separately to map CLIP text and image embeddings. Another distinct feature of the proposed model is the modified MoVQ implementation, which serves as the image autoencoder component. Overall, the designed model contains 3.3B parameters. We also deployed a user-friendly demo system that supports diverse generative modes such as text-to-image generation, image fusion, text and image fusion, image variation generation, and text-guided inpainting/outpainting. Additionally, we released the source code and checkpoints for the Kandinsky models. Experimental evaluations demonstrate an FID score of 8.03 on the COCO-30K dataset, marking our model as the top open-source performer in terms of measurable image generation quality. | [
"Razzhigaev, Anton",
"Shakhmatov, Arseniy",
"Maltseva, Anastasia",
"Arkhipkin, Vladimir",
"Pavlov, Igor",
"Ryabov, Ilya",
"Kuts, Angelina",
"Panchenko, Alex",
"er",
"Kuznetsov, Andrey",
"Dimitrov, Denis"
] | Kandinsky: An Improved Text-to-Image Synthesis with Image Prior and Latent Diffusion | emnlp-demo.25 | 2310.03502 | [
"https://github.com/ai-forever/Kandinsky-2"
] | https://huggingface.co/papers/2310.03502 | 4 | 77 | 5 | 10 | [] | [] | [
"hysts/Kandinsky-2-2",
"hysts/Kandinsky-2-1"
] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.26.bib | https://aclanthology.org/2023.emnlp-demo.26/ | @inproceedings{iana-etal-2023-newsreclib,
title = "{N}ews{R}ec{L}ib: A {P}y{T}orch-Lightning Library for Neural News Recommendation",
author = "Iana, Andreea and
Glava{\v{s}}, Goran and
Paulheim, Heiko",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.26",
doi = "10.18653/v1/2023.emnlp-demo.26",
pages = "296--310",
abstract = "NewsRecLib is an open-source library based on Pytorch-Lightning and Hydra developed for training and evaluating neural news recommendation models. The foremost goals of NewsRecLib are to promote reproducible research and rigorous experimental evaluation by (i) providing a unified and highly configurable framework for exhaustive experimental studies and (ii) enabling a thorough analysis of the performance contribution of different model architecture components and training regimes. NewsRecLib is highly modular, allows specifying experiments in a single configuration file, and includes extensive logging facilities. Moreover, NewsRecLib provides out-of-the-box implementations of several prominent neural models, training methods, standard evaluation benchmarks, and evaluation metrics for news recommendation.",
}
| NewsRecLib is an open-source library based on PyTorch-Lightning and Hydra, developed for training and evaluating neural news recommendation models. The foremost goals of NewsRecLib are to promote reproducible research and rigorous experimental evaluation by (i) providing a unified and highly configurable framework for exhaustive experimental studies and (ii) enabling a thorough analysis of the performance contribution of different model architecture components and training regimes. NewsRecLib is highly modular, allows specifying experiments in a single configuration file, and includes extensive logging facilities. Moreover, NewsRecLib provides out-of-the-box implementations of several prominent neural models, training methods, standard evaluation benchmarks, and evaluation metrics for news recommendation. | [
"Iana, Andreea",
"Glava{\\v{s}}, Goran",
"Paulheim, Heiko"
] | NewsRecLib: A PyTorch-Lightning Library for Neural News Recommendation | emnlp-demo.26 | 2310.01146 | [
"https://github.com/andreeaiana/newsreclib"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.27.bib | https://aclanthology.org/2023.emnlp-demo.27/ | @inproceedings{rush-2023-minichain,
title = "{M}ini{C}hain: A Small Library for Coding with Large Language Models",
author = "Rush, Alexander",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.27",
doi = "10.18653/v1/2023.emnlp-demo.27",
pages = "311--317",
abstract = "Programming augmented by large language models (LLMs) opens up many new application areas, but also requires care. LLMs are accurate enough, on average, to replace core functionality, yet make basic mistakes that demonstrate a lack of robustness. An ecosystem of prompting tools, from intelligent agents to new programming languages, have emerged with different solutions for patching LLMs with other tools. In this work, we introduce MiniChain, an opinionated tool for LLM augmented programming, with the design goals of ease-of-use of prototyping, transparency through automatic visualization, and a minimalistic approach to advanced features. The MiniChain library provides core primitives for coding LLM calls, separating out prompt templates, and capturing program structure. The library includes demo implementations of the main applications papers in the area, including chat-bots, code generation, retrieval-based question answering, and complex information extraction. The library is open-source and available at https://github.com/srush/MiniChain, with code demos available at https://srush-minichain.hf.space/, and video demo at https://www.youtube.com/watch?v=VszZ1VnO7sk.",
}
| Programming augmented by large language models (LLMs) opens up many new application areas, but also requires care. LLMs are accurate enough, on average, to replace core functionality, yet make basic mistakes that demonstrate a lack of robustness. An ecosystem of prompting tools, from intelligent agents to new programming languages, has emerged with different solutions for patching LLMs with other tools. In this work, we introduce MiniChain, an opinionated tool for LLM-augmented programming, with the design goals of ease of use for prototyping, transparency through automatic visualization, and a minimalistic approach to advanced features. The MiniChain library provides core primitives for coding LLM calls, separating out prompt templates, and capturing program structure. The library includes demo implementations of the main application papers in the area, including chat-bots, code generation, retrieval-based question answering, and complex information extraction. The library is open-source and available at https://github.com/srush/MiniChain, with code demos available at https://srush-minichain.hf.space/, and a video demo at https://www.youtube.com/watch?v=VszZ1VnO7sk. | [
"Rush, Alex",
"er"
] | MiniChain: A Small Library for Coding with Large Language Models | emnlp-demo.27 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.28.bib | https://aclanthology.org/2023.emnlp-demo.28/ | @inproceedings{lai-etal-2023-okapi,
title = "Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback",
author = "Lai, Viet and
Nguyen, Chien and
Ngo, Nghia and
Nguyen, Thuat and
Dernoncourt, Franck and
Rossi, Ryan and
Nguyen, Thien",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.28",
doi = "10.18653/v1/2023.emnlp-demo.28",
pages = "318--327",
abstract = "A key technology for large language models (LLMs) involves instruction tuning that helps align the models{'} responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning characterize supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are applied to produce the best commercial LLMs. To improve the accessibility of LLMs, various instruction-tuned open-source LLMs have also been introduced recently. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their accessibility to many other languages in the world. In addition, SFT has been used as the only approach to instruction-tune open-source LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework with created resources, fine-tuned LLMs, interaction scripts are released at https://github.com/nlp-uoregon/Okapi. A demo video to show our framework can also be found at: https://youtu.be/QFV2fkPwvi0.",
}
| A key technology for large language models (LLMs) involves instruction tuning that helps align the models{'} responses with human expectations to realize impressive learning abilities. Two major approaches for instruction tuning are supervised fine-tuning (SFT) and reinforcement learning from human feedback (RLHF), which are applied to produce the best commercial LLMs. To improve the accessibility of LLMs, various instruction-tuned open-source LLMs have also been introduced recently. However, existing open-source LLMs have only been instruction-tuned for English and a few popular languages, thus hindering their accessibility to many other languages in the world. In addition, SFT has been used as the only approach to instruction-tune open-source LLMs for multiple languages. This has left a significant gap for fine-tuned LLMs based on RLHF in diverse languages and raised important questions on how RLHF can boost the performance of multilingual instruction tuning. To overcome this issue, we present Okapi, the first system with instruction-tuned LLMs based on RLHF for multiple languages. Okapi introduces instruction and response-ranked data in 26 diverse languages to facilitate the experiments and development of future multilingual LLM research. We also present benchmark datasets to enable the evaluation of generative LLMs in multiple languages. Our experiments demonstrate the advantages of RLHF for multilingual instruction over SFT for different base models and datasets. Our framework, together with the created resources, fine-tuned LLMs, and interaction scripts, is released at https://github.com/nlp-uoregon/Okapi. A demo video to show our framework can also be found at: https://youtu.be/QFV2fkPwvi0. | [
"Lai, Viet",
"Nguyen, Chien",
"Ngo, Nghia",
"Nguyen, Thuat",
"Dernoncourt, Franck",
"Rossi, Ryan",
"Nguyen, Thien"
] | Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback | emnlp-demo.28 | 2307.16039 | [
"https://github.com/nlp-uoregon/okapi"
] | https://huggingface.co/papers/2307.16039 | 3 | 3 | 0 | 7 | [] | [
"jon-tow/okapi_mmlu",
"jon-tow/okapi_truthfulqa",
"alvarobartt/hellaswag-okapi-eval-es",
"alvarobartt/mmlu-okapi-eval-es",
"OpenLLM-Ro/ro_mmlu",
"OpenLLM-Ro/ro_truthfulqa",
"OpenLLM-Ro/ro_arc_challenge",
"OpenLLM-Ro/ro_hellaswag",
"hynky/mmlu_okapi",
"jon-tow/okapi_hellaswag",
"jon-tow/okapi_arc_challenge",
"alvarobartt/arc-c-okapi-eval-es",
"alvarobartt/truthfulqa-okapi-eval-es",
"ZurichNLP/mlit-alpaca-eval",
"Hennara/arc_ar_challenge"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.29.bib | https://aclanthology.org/2023.emnlp-demo.29/ | @inproceedings{devare-etal-2023-sageviz,
title = "{SAGEV}iz: {S}chem{A} {GE}neration and Visualization",
author = "Devare, Sugam and
Koupaee, Mahnaz and
Gunapati, Gautham and
Ghosh, Sayontan and
Vallurupalli, Sai and
Lal, Yash Kumar and
Ferraro, Francis and
Chambers, Nathanael and
Durrett, Greg and
Mooney, Raymond and
Erk, Katrin and
Balasubramanian, Niranjan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.29",
doi = "10.18653/v1/2023.emnlp-demo.29",
pages = "328--335",
abstract = "Schema induction involves creating a graph representation depicting how events unfold in a scenario. We present SAGEViz, an intuitive and modular tool that utilizes human-AI collaboration to create and update complex schema graphs efficiently, where multiple annotators (humans and models) can work simultaneously on a schema graph from any domain. The tool consists of two components: (1) a curation component powered by plug-and-play event language models to create and expand event sequences while human annotators validate and enrich the sequences to build complex hierarchical schemas, and (2) an easy-to-use visualization component to visualize schemas at varying levels of hierarchy. Using supervised and few-shot approaches, our event language models can continually predict relevant events starting from a seed event. We conduct a user study and show that users need less effort in terms of interaction steps with SAGEViz to generate schemas of better quality. We also include a video demonstrating the system.",
}
| Schema induction involves creating a graph representation depicting how events unfold in a scenario. We present SAGEViz, an intuitive and modular tool that utilizes human-AI collaboration to create and update complex schema graphs efficiently, where multiple annotators (humans and models) can work simultaneously on a schema graph from any domain. The tool consists of two components: (1) a curation component powered by plug-and-play event language models to create and expand event sequences while human annotators validate and enrich the sequences to build complex hierarchical schemas, and (2) an easy-to-use visualization component to visualize schemas at varying levels of hierarchy. Using supervised and few-shot approaches, our event language models can continually predict relevant events starting from a seed event. We conduct a user study and show that, with SAGEViz, users need fewer interaction steps to generate schemas of better quality. We also include a video demonstrating the system. | [
"Devare, Sugam",
"Koupaee, Mahnaz",
"Gunapati, Gautham",
"Ghosh, Sayontan",
"Vallurupalli, Sai",
"Lal, Yash Kumar",
"Ferraro, Francis",
"Chambers, Nathanael",
"Durrett, Greg",
"Mooney, Raymond",
"Erk, Katrin",
"Balasubramanian, Niranjan"
] | SAGEViz: SchemA GEneration and Visualization | emnlp-demo.29 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.30.bib | https://aclanthology.org/2023.emnlp-demo.30/ | @inproceedings{heineman-etal-2023-thresh,
title = "Thresh: A Unified, Customizable and Deployable Platform for Fine-Grained Text Evaluation",
author = "Heineman, David and
Dou, Yao and
Xu, Wei",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.30",
doi = "10.18653/v1/2023.emnlp-demo.30",
pages = "336--345",
abstract = "Fine-grained, span-level human evaluation has emerged as a reliable and robust method for evaluating text generation tasks such as summarization, simplification, machine translation and news generation, and the derived annotations have been useful for training automatic metrics and improving language models. However, existing annotation tools implemented for these evaluation frameworks lack the adaptability to be extended to different domains or languages, or modify annotation settings according to user needs; and, the absence of a unified annotated data format inhibits the research in multi-task learning. In this paper, we introduce Thresh, a unified, customizable and deployable platform for fine-grained evaluation. With a single YAML configuration file, users can build and test an annotation interface for any framework within minutes {--} all in one web browser window. To facilitate collaboration and sharing, Thresh provides a community hub that hosts a collection of fine-grained frameworks and corresponding annotations made and collected by the community, covering a wide range of NLP tasks. For deployment, Thresh offers multiple options for any scale of annotation projects from small manual inspections to large crowdsourcing ones. Additionally, we introduce a Python library to streamline the entire process from typology design and deployment to annotation processing. Thresh is publicly accessible at https://thresh.tools.",
}
| Fine-grained, span-level human evaluation has emerged as a reliable and robust method for evaluating text generation tasks such as summarization, simplification, machine translation and news generation, and the derived annotations have been useful for training automatic metrics and improving language models. However, existing annotation tools implemented for these evaluation frameworks lack the adaptability to be extended to different domains or languages, or to modify annotation settings according to user needs; and the absence of a unified annotated data format inhibits research in multi-task learning. In this paper, we introduce Thresh, a unified, customizable and deployable platform for fine-grained evaluation. With a single YAML configuration file, users can build and test an annotation interface for any framework within minutes {--} all in one web browser window. To facilitate collaboration and sharing, Thresh provides a community hub that hosts a collection of fine-grained frameworks and corresponding annotations made and collected by the community, covering a wide range of NLP tasks. For deployment, Thresh offers multiple options for any scale of annotation project, from small manual inspections to large crowdsourcing ones. Additionally, we introduce a Python library to streamline the entire process from typology design and deployment to annotation processing. Thresh is publicly accessible at https://thresh.tools. | [
"Heineman, David",
"Dou, Yao",
"Xu, Wei"
] | Thresh: A Unified, Customizable and Deployable Platform for Fine-Grained Text Evaluation | emnlp-demo.30 | 2308.06953 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.31.bib | https://aclanthology.org/2023.emnlp-demo.31/ | @inproceedings{ma-etal-2023-insightpilot,
title = "{I}nsight{P}ilot: An {LLM}-Empowered Automated Data Exploration System",
author = "Ma, Pingchuan and
Ding, Rui and
Wang, Shuai and
Han, Shi and
Zhang, Dongmei",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.31",
doi = "10.18653/v1/2023.emnlp-demo.31",
pages = "346--352",
abstract = "Exploring data is crucial in data analysis, as it helps users understand and interpret the data more effectively. However, performing effective data exploration requires in-depth knowledge of the dataset, the user intent and expertise in data analysis techniques. Not being familiar with either can create obstacles that make the process time-consuming and overwhelming. To address this issue, we introduce InsightPilot, an LLM (Large Language Model)-based, automated data exploration system designed to simplify the data exploration process. InsightPilot features a set of carefully designed analysis actions that streamline the data exploration process. Given a natural language question, InsightPilot collaborates with the LLM to issue a sequence of analysis actions, explore the data and generate insights. We demonstrate the effectiveness of InsightPilot in a user study and a case study, showing how it can help users gain valuable insights from their datasets.",
}
| Exploring data is crucial in data analysis, as it helps users understand and interpret the data more effectively. However, performing effective data exploration requires in-depth knowledge of the dataset, the user intent, and expertise in data analysis techniques. Not being familiar with any of these can create obstacles that make the process time-consuming and overwhelming. To address this issue, we introduce InsightPilot, an LLM (Large Language Model)-based, automated data exploration system designed to simplify the data exploration process. InsightPilot features a set of carefully designed analysis actions that streamline the data exploration process. Given a natural language question, InsightPilot collaborates with the LLM to issue a sequence of analysis actions, explore the data and generate insights. We demonstrate the effectiveness of InsightPilot in a user study and a case study, showing how it can help users gain valuable insights from their datasets. | [
"Ma, Pingchuan",
"Ding, Rui",
"Wang, Shuai",
"Han, Shi",
"Zhang, Dongmei"
] | InsightPilot: An LLM-Empowered Automated Data Exploration System | emnlp-demo.31 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.32.bib | https://aclanthology.org/2023.emnlp-demo.32/ | @inproceedings{stanojevic-sartran-2023-synjax,
title = "{S}yn{J}ax: Structured Probability Distributions for {JAX}",
author = "Stanojevi{\'c}, Milo{\v{s}} and
Sartran, Laurent",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.32",
doi = "10.18653/v1/2023.emnlp-demo.32",
pages = "353--364",
abstract = "The development of deep learning software libraries enabled significant progress in the field by allowing users to focus on modeling, while letting the library to take care of the tedious and time-consuming task of optimizing execution for modern hardware accelerators. However, this has benefited only particular types of deep learning models, such as Transformers, whose primitives map easily to the vectorized computation. The models that explicitly account for structured objects, such as trees and segmentations, did not benefit equally because they require custom algorithms that are difficult to implement in a vectorized form. SynJax directly addresses this problem by providing an efficient vectorized implementation of inference algorithms for structured distributions covering alignment, tagging, segmentation, constituency trees and spanning trees. This is done by exploiting the connection between algorithms for automatic differentiation and probabilistic inference. With SynJax we can build large-scale differentiable models that explicitly model structure in the data. The code is available at https://github.com/google-deepmind/synjax",
}
| The development of deep learning software libraries enabled significant progress in the field by allowing users to focus on modeling, while letting the library take care of the tedious and time-consuming task of optimizing execution for modern hardware accelerators. However, this has benefited only particular types of deep learning models, such as Transformers, whose primitives map easily to vectorized computation. The models that explicitly account for structured objects, such as trees and segmentations, did not benefit equally because they require custom algorithms that are difficult to implement in a vectorized form. SynJax directly addresses this problem by providing an efficient vectorized implementation of inference algorithms for structured distributions covering alignment, tagging, segmentation, constituency trees and spanning trees. This is done by exploiting the connection between algorithms for automatic differentiation and probabilistic inference. With SynJax we can build large-scale differentiable models that explicitly model structure in the data. The code is available at https://github.com/google-deepmind/synjax | [
"Stanojevi{\\'c}, Milo{\\v{s}}",
"Sartran, Laurent"
] | SynJax: Structured Probability Distributions for JAX | emnlp-demo.32 | 2308.03291 | [
"https://github.com/stanojevic/fast-mst-algorithm"
] | https://huggingface.co/papers/2308.03291 | 1 | 5 | 0 | 2 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.33.bib | https://aclanthology.org/2023.emnlp-demo.33/ | @inproceedings{nguyen-etal-2023-resin,
title = "{RESIN}-{EDITOR}: A Schema-guided Hierarchical Event Graph Visualizer and Editor",
author = "Nguyen, Khanh Duy and
Zhang, Zixuan and
Suchocki, Reece and
Li, Sha and
Palmer, Martha and
Brown, Susan Windisch and
Han, Jiawei and
Ji, Heng",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.33",
doi = "10.18653/v1/2023.emnlp-demo.33",
pages = "365--372",
abstract = "In this paper, we present RESIN-EDITOR, an interactive event graph visualizer and editor designed for analyzing complex events. Our RESIN-EDITOR system allows users to render and freely edit hierarchical event graphs extracted from multimedia and multi-document news clusters with guidance from human-curated event schemas. RESIN-EDITOR{'}s unique features include hierarchical graph visualization, comprehensive source tracing, and interactive user editing, which significantly outperforms existing Information Extraction (IE) visualization tools in both IE result analysis and general model improvements. In our evaluation of RESIN-EDITOR, we demonstrate ways in which our tool is effective in understanding complex events and enhancing system performances. The source code, a video demonstration, and a live website for RESIN-EDITOR have been made publicly available.",
}
| In this paper, we present RESIN-EDITOR, an interactive event graph visualizer and editor designed for analyzing complex events. Our RESIN-EDITOR system allows users to render and freely edit hierarchical event graphs extracted from multimedia and multi-document news clusters with guidance from human-curated event schemas. RESIN-EDITOR{'}s unique features include hierarchical graph visualization, comprehensive source tracing, and interactive user editing, with which it significantly outperforms existing Information Extraction (IE) visualization tools in both IE result analysis and general model improvement. In our evaluation of RESIN-EDITOR, we demonstrate ways in which our tool is effective in understanding complex events and enhancing system performance. The source code, a video demonstration, and a live website for RESIN-EDITOR have been made publicly available. | [
"Nguyen, Khanh Duy",
"Zhang, Zixuan",
"Suchocki, Reece",
"Li, Sha",
"Palmer, Martha",
"Brown, Susan Windisch",
"Han, Jiawei",
"Ji, Heng"
] | RESIN-EDITOR: A Schema-guided Hierarchical Event Graph Visualizer and Editor | emnlp-demo.33 | 2312.03093 | [
"https://github.com/blender-nlp/resin-editor"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.34.bib | https://aclanthology.org/2023.emnlp-demo.34/ | @inproceedings{hajialigol-etal-2023-drgcoder,
title = "{DRGC}oder: Explainable Clinical Coding for the Early Prediction of Diagnostic-Related Groups",
author = "Hajialigol, Daniel and
Kaknes, Derek and
Barbour, Tanner and
Yao, Daphne and
North, Chris and
Sun, Jimeng and
Liem, David and
Wang, Xuan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.34",
doi = "10.18653/v1/2023.emnlp-demo.34",
pages = "373--380",
abstract = "Medical claim coding is the process of transforming medical records, usually presented as free texts written by clinicians, or discharge summaries, into structured codes in a classification system such as ICD-10 (International Classification of Diseases, Tenth Revision) or DRG (Diagnosis-Related Group) codes. This process is essential for medical billing and transitional care; however, manual coding is time-consuming, error-prone, and expensive. To solve these issues, we propose DRGCoder, an explainability-enhanced clinical claim coding system for the early prediction of medical severity DRGs (MS-DRGs), a classification system that categorizes patients{'} hospital stays into various DRG groups based on the severity of illness and mortality risk. The DRGCoder framework introduces a novel multi-task Transformer model for MS-DRG prediction, modeling both the DRG labels of the discharge summaries and the important, or salient words within he discharge summaries. We allow users to inspect DRGCoder{'}s reasoning by visualizing the weights for each word of the input. Additionally, DRGCoder allows users to identify diseases within discharge summaries and compare across multiple discharge summaries. Our demo is available at https://huggingface.co/spaces/danielhajialigol/DRGCoder. A video demonstrating the demo can be found at https://www.youtube.com/watch?v=pcdiG6VwqlA",
}
| Medical claim coding is the process of transforming medical records, usually presented as free text written by clinicians or as discharge summaries, into structured codes in a classification system such as ICD-10 (International Classification of Diseases, Tenth Revision) or DRG (Diagnosis-Related Group) codes. This process is essential for medical billing and transitional care; however, manual coding is time-consuming, error-prone, and expensive. To solve these issues, we propose DRGCoder, an explainability-enhanced clinical claim coding system for the early prediction of medical severity DRGs (MS-DRGs), a classification system that categorizes patients{'} hospital stays into various DRG groups based on the severity of illness and mortality risk. The DRGCoder framework introduces a novel multi-task Transformer model for MS-DRG prediction, modeling both the DRG labels of the discharge summaries and the important, or salient, words within the discharge summaries. We allow users to inspect DRGCoder{'}s reasoning by visualizing the weights for each word of the input. Additionally, DRGCoder allows users to identify diseases within discharge summaries and compare across multiple discharge summaries. Our demo is available at https://huggingface.co/spaces/danielhajialigol/DRGCoder. A video demonstrating the demo can be found at https://www.youtube.com/watch?v=pcdiG6VwqlA | [
"Hajialigol, Daniel",
"Kaknes, Derek",
"Barbour, Tanner",
"Yao, Daphne",
"North, Chris",
"Sun, Jimeng",
"Liem, David",
"Wang, Xuan"
] | DRGCoder: Explainable Clinical Coding for the Early Prediction of Diagnostic-Related Groups | emnlp-demo.34 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.35.bib | https://aclanthology.org/2023.emnlp-demo.35/ | @inproceedings{cai-etal-2023-camra,
title = "{CAMRA}: Copilot for {AMR} Annotation",
author = "Cai, Jon and
Ahmed, Shafiuddin Rehan and
Bonn, Julia and
Wright-Bettner, Kristin and
Palmer, Martha and
Martin, James H.",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.35",
doi = "10.18653/v1/2023.emnlp-demo.35",
pages = "381--388",
abstract = "In this paper, we introduce CAMRA (Copilot for AMR Annotatations), a cutting-edge web-based tool designed for constructing Abstract Meaning Representation (AMR) from natural language text. CAMRA offers a novel approach to deep lexical semantics annotation such as AMR, treating AMR annotation akin to coding in programming languages. Leveraging the familiarity of programming paradigms, CAMRA encompasses all essential features of existing AMR editors, including example lookup, while going a step further by integrating Propbank roleset lookup as an autocomplete feature within the tool. Notably, CAMRA incorporates AMR parser models as coding co-pilots, greatly enhancing the efficiency and accuracy of AMR annotators.",
}
| In this paper, we introduce CAMRA (Copilot for AMR Annotations), a cutting-edge web-based tool designed for constructing Abstract Meaning Representation (AMR) from natural language text. CAMRA offers a novel approach to deep lexical semantics annotation such as AMR, treating AMR annotation as akin to coding in programming languages. Leveraging the familiarity of programming paradigms, CAMRA encompasses all essential features of existing AMR editors, including example lookup, while going a step further by integrating PropBank roleset lookup as an autocomplete feature within the tool. Notably, CAMRA incorporates AMR parser models as coding co-pilots, greatly enhancing the efficiency and accuracy of AMR annotators. | [
"Cai, Jon",
"Ahmed, Shafiuddin Rehan",
"Bonn, Julia",
"Wright-Bettner, Kristin",
"Palmer, Martha",
"Martin, James H."
] | CAMRA: Copilot for AMR Annotation | emnlp-demo.35 | 2311.10928 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.36.bib | https://aclanthology.org/2023.emnlp-demo.36/ | @inproceedings{zhong-etal-2023-reaction,
title = "Reaction Miner: An Integrated System for Chemical Reaction Extraction from Textual Data",
author = "Zhong, Ming and
Ouyang, Siru and
Jiao, Yizhu and
Kargupta, Priyanka and
Luo, Leo and
Shen, Yanzhen and
Zhou, Bobby and
Zhong, Xianrui and
Liu, Xuan and
Li, Hongxiang and
Xiao, Jinfeng and
Jiang, Minhao and
Hu, Vivian and
Wang, Xuan and
Ji, Heng and
Burke, Martin and
Zhao, Huimin and
Han, Jiawei",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.36",
doi = "10.18653/v1/2023.emnlp-demo.36",
pages = "389--402",
abstract = "Chemical reactions, as a core entity in the realm of chemistry, hold crucial implications in diverse areas ranging from hands-on laboratory research to advanced computational drug design. Despite a burgeoning interest in employing NLP techniques to extract these reactions, aligning this task with the real-world requirements of chemistry practitioners remains an ongoing challenge. In this paper, we present Reaction Miner, a system specifically designed to interact with raw scientific literature, delivering precise and more informative chemical reactions. Going beyond mere extraction, Reaction Miner integrates a holistic workflow: it accepts PDF files as input, bypassing the need for pre-processing and bolstering user accessibility. Subsequently, a text segmentation module ensures that the refined text encapsulates complete chemical reactions, augmenting the accuracy of extraction. Moreover, Reaction Miner broadens the scope of existing pre-defined reaction roles, including vital attributes previously neglected, thereby offering a more comprehensive depiction of chemical reactions. Evaluations conducted by chemistry domain users highlight the efficacy of each module in our system, demonstrating Reaction Miner as a powerful tool in this field.",
}
| Chemical reactions, as a core entity in the realm of chemistry, hold crucial implications in diverse areas ranging from hands-on laboratory research to advanced computational drug design. Despite a burgeoning interest in employing NLP techniques to extract these reactions, aligning this task with the real-world requirements of chemistry practitioners remains an ongoing challenge. In this paper, we present Reaction Miner, a system specifically designed to interact with raw scientific literature, delivering precise and more informative chemical reactions. Going beyond mere extraction, Reaction Miner integrates a holistic workflow: it accepts PDF files as input, bypassing the need for pre-processing and bolstering user accessibility. Subsequently, a text segmentation module ensures that the refined text encapsulates complete chemical reactions, augmenting the accuracy of extraction. Moreover, Reaction Miner broadens the scope of existing pre-defined reaction roles, including vital attributes previously neglected, thereby offering a more comprehensive depiction of chemical reactions. Evaluations conducted by chemistry domain users highlight the efficacy of each module in our system, demonstrating Reaction Miner as a powerful tool in this field. | [
"Zhong, Ming",
"Ouyang, Siru",
"Jiao, Yizhu",
"Kargupta, Priyanka",
"Luo, Leo",
"Shen, Yanzhen",
"Zhou, Bobby",
"Zhong, Xianrui",
"Liu, Xuan",
"Li, Hongxiang",
"Xiao, Jinfeng",
"Jiang, Minhao",
"Hu, Vivian",
"Wang, Xuan",
"Ji, Heng",
"Burke, Martin",
"Zhao, Huimin",
"Han, Jiawei"
] | Reaction Miner: An Integrated System for Chemical Reaction Extraction from Textual Data | emnlp-demo.36 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.37.bib | https://aclanthology.org/2023.emnlp-demo.37/ | @inproceedings{cattan-etal-2023-champ,
title = "{CHAMP}: Efficient Annotation and Consolidation of Cluster Hierarchies",
author = "Cattan, Arie and
Hope, Tom and
Downey, Doug and
Bar-Haim, Roy and
Eden, Lilach and
Kantor, Yoav and
Dagan, Ido",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.37",
doi = "10.18653/v1/2023.emnlp-demo.37",
pages = "403--412",
abstract = "Various NLP tasks require a complex hierarchical structure over nodes, where each node is a cluster of items. Examples include generating entailment graphs, hierarchical cross-document coreference resolution, annotating event and subevent relations, etc. To enable efficient annotation of such hierarchical structures, we release CHAMP, an open source tool allowing to incrementally construct both clusters and hierarchy simultaneously over any type of texts. This incremental approach significantly reduces annotation time compared to the common pairwise annotation approach and also guarantees maintaining transitivity at the cluster and hierarchy levels. Furthermore, CHAMP includes a consolidation mode, where an adjudicator can easily compare multiple cluster hierarchy annotations and resolve disagreements.",
}
| Various NLP tasks require a complex hierarchical structure over nodes, where each node is a cluster of items. Examples include generating entailment graphs, hierarchical cross-document coreference resolution, annotating event and subevent relations, etc. To enable efficient annotation of such hierarchical structures, we release CHAMP, an open source tool allowing to incrementally construct both clusters and hierarchy simultaneously over any type of texts. This incremental approach significantly reduces annotation time compared to the common pairwise annotation approach and also guarantees maintaining transitivity at the cluster and hierarchy levels. Furthermore, CHAMP includes a consolidation mode, where an adjudicator can easily compare multiple cluster hierarchy annotations and resolve disagreements. | [
"Cattan, Arie",
"Hope, Tom",
"Downey, Doug",
"Bar-Haim, Roy",
"Eden, Lilach",
"Kantor, Yoav",
"Dagan, Ido"
] | CHAMP: Efficient Annotation and Consolidation of Cluster Hierarchies | emnlp-demo.37 | 2311.11301 | [
"https://github.com/ariecattan/champ"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.38.bib | https://aclanthology.org/2023.emnlp-demo.38/ | @inproceedings{viswanathan-etal-2023-prompt2model,
title = "{P}rompt2{M}odel: Generating Deployable Models from Natural Language Instructions",
author = "Viswanathan, Vijay and
Zhao, Chenyang and
Bertsch, Amanda and
Wu, Tongshuang and
Neubig, Graham",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.38",
doi = "10.18653/v1/2023.emnlp-demo.38",
pages = "413--421",
abstract = "Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models; they require extensive computational resources for deployment and can be gated behind APIs. In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pretrained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that given the same few-shot prompt as input, Prompt2Model trains models that outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20{\%} while being up to 700 times smaller. We also show that this data can be used to obtain reliable performance estimates of model performance, enabling model developers to assess model reliability before deployment. Prompt2Model is available open-source at https://github.com/neulab/prompt2model. Our demo video is posted at \url{youtu.be/LYYQ_EhGd-Q}.",
}
| Large language models (LLMs) enable system builders today to create competent NLP systems through prompting, where they only need to describe the task in natural language and provide a few examples. However, in other ways, LLMs are a step backward from traditional special-purpose NLP models; they require extensive computational resources for deployment and can be gated behind APIs. In this paper, we propose Prompt2Model, a general-purpose method that takes a natural language task description like the prompts provided to LLMs, and uses it to train a special-purpose model that is conducive to deployment. This is done through a multi-step process of retrieval of existing datasets and pretrained models, dataset generation using LLMs, and supervised fine-tuning on these retrieved and generated datasets. Over three tasks, we demonstrate that given the same few-shot prompt as input, Prompt2Model trains models that outperform the results of a strong LLM, gpt-3.5-turbo, by an average of 20{\%} while being up to 700 times smaller. We also show that this data can be used to obtain reliable performance estimates of model performance, enabling model developers to assess model reliability before deployment. Prompt2Model is available open-source at https://github.com/neulab/prompt2model. Our demo video is posted at \url{youtu.be/LYYQ_EhGd-Q}. | [
"Viswanathan, Vijay",
"Zhao, Chenyang",
"Bertsch, Am",
"a",
"Wu, Tongshuang",
"Neubig, Graham"
] | Prompt2Model: Generating Deployable Models from Natural Language Instructions | emnlp-demo.38 | 2308.12261 | [
"https://github.com/neulab/prompt2model"
] | https://huggingface.co/papers/2308.12261 | 0 | 1 | 0 | 5 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.39.bib | https://aclanthology.org/2023.emnlp-demo.39/ | @inproceedings{milbauer-etal-2023-newssense,
title = "{N}ews{S}ense: Reference-free Verification via Cross-document Comparison",
author = "Milbauer, Jeremiah and
Ding, Ziqi and
Wu, Zhijin and
Wu, Tongshuang",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.39",
doi = "10.18653/v1/2023.emnlp-demo.39",
pages = "422--430",
abstract = "We present NewsSense, a novel sensemaking tool and reading interface designed to collect and integrate information from multiple news articles on a central topic. NewsSense provides {``}reference-free verification,{''} augmenting a central grounding article of the user{'}s choice by: (1) linking to related articles from different sources; and (2) providing inline highlights on how specific claims are either supported or contradicted by information from other articles. Using NewsSense, users can seamlessly digest and cross-check multiple information sources without disturbing their natural reading flow. Our pilot study shows that NewsSense has the potential to help users identify key information, verify the credibility of news articles, explore different perspectives, and understand what content is supported, contradicted, or missing.",
}
| We present NewsSense, a novel sensemaking tool and reading interface designed to collect and integrate information from multiple news articles on a central topic. NewsSense provides {``}reference-free verification,{''} augmenting a central grounding article of the user{'}s choice by: (1) linking to related articles from different sources; and (2) providing inline highlights on how specific claims are either supported or contradicted by information from other articles. Using NewsSense, users can seamlessly digest and cross-check multiple information sources without disturbing their natural reading flow. Our pilot study shows that NewsSense has the potential to help users identify key information, verify the credibility of news articles, explore different perspectives, and understand what content is supported, contradicted, or missing. | [
"Milbauer, Jeremiah",
"Ding, Ziqi",
"Wu, Zhijin",
"Wu, Tongshuang"
] | NewsSense: Reference-free Verification via Cross-document Comparison | emnlp-demo.39 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.40.bib | https://aclanthology.org/2023.emnlp-demo.40/ | @inproceedings{rebedea-etal-2023-nemo,
title = "{N}e{M}o Guardrails: A Toolkit for Controllable and Safe {LLM} Applications with Programmable Rails",
author = "Rebedea, Traian and
Dinu, Razvan and
Sreedhar, Makesh Narsimhan and
Parisien, Christopher and
Cohen, Jonathan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.40",
doi = "10.18653/v1/2023.emnlp-demo.40",
pages = "431--445",
abstract = "NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or rails for short) are a specific way of controlling the output of an LLM, such as not talking about topics considered harmful, following a predefined dialogue path, using a particular language style, and more. There are several mechanisms that allow LLM providers and developers to add guardrails that are embedded into a specific model at training, e.g. using model alignment. Using a runtime inspired from dialogue management, NeMo Guardrails provides a different approach by allowing developers to add programmable rails to LLM applications - these are user-defined, independent of the underlying LLM, and interpretable. Our initial results show that the proposed approach can be used with several LLM providers to develop controllable and safe LLM applications using programmable rails.",
}
| NeMo Guardrails is an open-source toolkit for easily adding programmable guardrails to LLM-based conversational systems. Guardrails (or rails for short) are a specific way of controlling the output of an LLM, such as not talking about topics considered harmful, following a predefined dialogue path, using a particular language style, and more. There are several mechanisms that allow LLM providers and developers to add guardrails that are embedded into a specific model at training, e.g. using model alignment. Using a runtime inspired from dialogue management, NeMo Guardrails provides a different approach by allowing developers to add programmable rails to LLM applications - these are user-defined, independent of the underlying LLM, and interpretable. Our initial results show that the proposed approach can be used with several LLM providers to develop controllable and safe LLM applications using programmable rails. | [
"Rebedea, Traian",
"Dinu, Razvan",
"Sreedhar, Makesh Narsimhan",
"Parisien, Christopher",
"Cohen, Jonathan"
] | NeMo Guardrails: A Toolkit for Controllable and Safe LLM Applications with Programmable Rails | emnlp-demo.40 | 2310.10501 | [
"https://github.com/nvidia/nemo-guardrails"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.41.bib | https://aclanthology.org/2023.emnlp-demo.41/ | @inproceedings{fadeeva-etal-2023-lm,
title = "{LM}-Polygraph: Uncertainty Estimation for Language Models",
author = "Fadeeva, Ekaterina and
Vashurin, Roman and
Tsvigun, Akim and
Vazhentsev, Artem and
Petrakov, Sergey and
Fedyanin, Kirill and
Vasilev, Daniil and
Goncharova, Elizaveta and
Panchenko, Alexander and
Panov, Maxim and
Baldwin, Timothy and
Shelmanov, Artem",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.41",
doi = "10.18653/v1/2023.emnlp-demo.41",
pages = "446--461",
abstract = "Recent advancements in the capabilities of large language models (LLMs) have paved the way for a myriad of groundbreaking applications in various fields. However, a significant challenge arises as these models often {``}hallucinate{''}, i.e., fabricate facts without providing users an apparent means to discern the veracity of their statements. Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of LLMs. However, to date, research on UE methods for LLMs has been focused primarily on theoretical rather than engineering contributions. In this work, we tackle this issue by introducing LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python. Additionally, it introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores, empowering end-users to discern unreliable responses. LM-Polygraph is compatible with the most recent LLMs, including BLOOMz, LLaMA-2, ChatGPT, and GPT-4, and is designed to support future releases of similarly-styled LMs.",
}
| Recent advancements in the capabilities of large language models (LLMs) have paved the way for a myriad of groundbreaking applications in various fields. However, a significant challenge arises as these models often {``}hallucinate{''}, i.e., fabricate facts without providing users an apparent means to discern the veracity of their statements. Uncertainty estimation (UE) methods are one path to safer, more responsible, and more effective use of LLMs. However, to date, research on UE methods for LLMs has been focused primarily on theoretical rather than engineering contributions. In this work, we tackle this issue by introducing LM-Polygraph, a framework with implementations of a battery of state-of-the-art UE methods for LLMs in text generation tasks, with unified program interfaces in Python. Additionally, it introduces an extendable benchmark for consistent evaluation of UE techniques by researchers, and a demo web application that enriches the standard chat dialog with confidence scores, empowering end-users to discern unreliable responses. LM-Polygraph is compatible with the most recent LLMs, including BLOOMz, LLaMA-2, ChatGPT, and GPT-4, and is designed to support future releases of similarly-styled LMs. | [
"Fadeeva, Ekaterina",
"Vashurin, Roman",
"Tsvigun, Akim",
"Vazhentsev, Artem",
"Petrakov, Sergey",
"Fedyanin, Kirill",
"Vasilev, Daniil",
"Goncharova, Elizaveta",
"Panchenko, Alex",
"er",
"Panov, Maxim",
"Baldwin, Timothy",
"Shelmanov, Artem"
] | LM-Polygraph: Uncertainty Estimation for Language Models | emnlp-demo.41 | 2311.07383 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.42.bib | https://aclanthology.org/2023.emnlp-demo.42/ | @inproceedings{zhu-etal-2023-descriptive,
title = "Descriptive Knowledge Graph in Biomedical Domain",
author = "Zhu, Kerui and
Huang, Jie and
Chang, Kevin Chen-Chuan",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.42",
doi = "10.18653/v1/2023.emnlp-demo.42",
pages = "462--470",
abstract = "We present a novel system that automatically extracts and generates informative and descriptive sentences from the biomedical corpus and facilitates the efficient search for relational knowledge. Unlike previous search engines or exploration systems that retrieve unconnected passages, our system organizes descriptive sentences as a relational graph, enabling researchers to explore closely related biomedical entities (e.g., diseases treated by a chemical) or indirectly connected entities (e.g., potential drugs for treating a disease). Our system also uses ChatGPT and a fine-tuned relation synthesis model to generate concise and reliable descriptive sentences from retrieved information, reducing the need for extensive human reading effort. With our system, researchers can easily obtain both high-level knowledge and detailed references and interactively steer to the information of interest. We spotlight the application of our system in COVID-19 research, illustrating its utility in areas such as drug repurposing and literature curation.",
}
| We present a novel system that automatically extracts and generates informative and descriptive sentences from the biomedical corpus and facilitates the efficient search for relational knowledge. Unlike previous search engines or exploration systems that retrieve unconnected passages, our system organizes descriptive sentences as a relational graph, enabling researchers to explore closely related biomedical entities (e.g., diseases treated by a chemical) or indirectly connected entities (e.g., potential drugs for treating a disease). Our system also uses ChatGPT and a fine-tuned relation synthesis model to generate concise and reliable descriptive sentences from retrieved information, reducing the need for extensive human reading effort. With our system, researchers can easily obtain both high-level knowledge and detailed references and interactively steer to the information of interest. We spotlight the application of our system in COVID-19 research, illustrating its utility in areas such as drug repurposing and literature curation. | [
"Zhu, Kerui",
"Huang, Jie",
"Chang, Kevin Chen-Chuan"
] | Descriptive Knowledge Graph in Biomedical Domain | emnlp-demo.42 | 2310.11681 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.43.bib | https://aclanthology.org/2023.emnlp-demo.43/ | @inproceedings{sucik-etal-2023-prompterator,
title = "Prompterator: Iterate Efficiently towards More Effective Prompts",
author = "Su{\v{c}}ik, Samuel and
Skala, Daniel and
{\v{S}}vec, Andrej and
Hra{\v{s}}ka, Peter and
{\v{S}}uppa, Marek",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.43",
doi = "10.18653/v1/2023.emnlp-demo.43",
pages = "471--478",
abstract = "With the advent of Large Language Models (LLMs) the process known as prompting, which entices the LLM to solve an arbitrary language processing task without the need for finetuning, has risen to prominence. Finding well-performing prompts, however, is a non-trivial task which requires experimentation in order to arrive at a prompt that solves a specific task. When a given task does not readily reduce to one that can be easily measured with well established metrics, human evaluation of the results obtained by prompting is often necessary. In this work we present prompterator, a tool that helps the user interactively iterate over various potential prompts and choose the best performing one based on human feedback. It is distributed as an open source package with out-of-the-box support for various LLM providers and was designed to be easily extensible.",
}
| With the advent of Large Language Models (LLMs) the process known as prompting, which entices the LLM to solve an arbitrary language processing task without the need for finetuning, has risen to prominence. Finding well-performing prompts, however, is a non-trivial task which requires experimentation in order to arrive at a prompt that solves a specific task. When a given task does not readily reduce to one that can be easily measured with well established metrics, human evaluation of the results obtained by prompting is often necessary. In this work we present prompterator, a tool that helps the user interactively iterate over various potential prompts and choose the best performing one based on human feedback. It is distributed as an open source package with out-of-the-box support for various LLM providers and was designed to be easily extensible. | [
"Su{\\v{c}}ik, Samuel",
"Skala, Daniel",
"{\\v{S}}vec, Andrej",
"Hra{\\v{s}}ka, Peter",
"{\\v{S}}uppa, Marek"
] | Prompterator: Iterate Efficiently towards More Effective Prompts | emnlp-demo.43 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.44.bib | https://aclanthology.org/2023.emnlp-demo.44/ | @inproceedings{zhang-etal-2023-zhujiu,
title = "{Z}hu{J}iu: A Multi-dimensional, Multi-faceted {C}hinese Benchmark for Large Language Models",
author = "Zhang, Baoli and
Xie, Haining and
Du, Pengfan and
Chen, Junhao and
Cao, Pengfei and
Chen, Yubo and
Liu, Shengping and
Liu, Kang and
Zhao, Jun",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.44",
doi = "10.18653/v1/2023.emnlp-demo.44",
pages = "479--494",
abstract = "The unprecedented performance of LLMs requires comprehensive and accurate evaluation. We argue that for LLMs evaluation, benchmarks need to be comprehensive and systematic. To this end, we propose the Zhujiu benchmark, which has the following strengths: (1) Multi-dimensional ability coverage: We comprehensively evaluate LLMs across 7 ability dimensions covering 51 tasks. Especially, we also propose a new benchmark that focus on knowledge ability of LLMs. (2) Multi-faceted evaluation methods collaboration: We use 3 different yet complementary evaluation methods to comprehensively evaluate LLMs, which can ensure the authority and accuracy of the evaluation results. (3) Comprehensive Chinese benchmark: ZhuJiu is the pioneering benchmark that fully assesses LLMs in Chinese, while also providing equally robust evaluation abilities in English. (4) Avoiding potential data leakage: To avoid data leakage, we construct evaluation data specifically for 37 tasks. We evaluate 10 current mainstream LLMs, and conduct an in-depth discussion and analysis of their results. The ZhuJiu benchmark and open-participation leaderboard are publicly released at \url{http://www.zhujiu-benchmark.com} and we also provide a demo video at \url{https://youtu.be/qypkJ89L1Ic.}",
}
| The unprecedented performance of LLMs requires comprehensive and accurate evaluation. We argue that for LLMs evaluation, benchmarks need to be comprehensive and systematic. To this end, we propose the Zhujiu benchmark, which has the following strengths: (1) Multi-dimensional ability coverage: We comprehensively evaluate LLMs across 7 ability dimensions covering 51 tasks. Especially, we also propose a new benchmark that focus on knowledge ability of LLMs. (2) Multi-faceted evaluation methods collaboration: We use 3 different yet complementary evaluation methods to comprehensively evaluate LLMs, which can ensure the authority and accuracy of the evaluation results. (3) Comprehensive Chinese benchmark: ZhuJiu is the pioneering benchmark that fully assesses LLMs in Chinese, while also providing equally robust evaluation abilities in English. (4) Avoiding potential data leakage: To avoid data leakage, we construct evaluation data specifically for 37 tasks. We evaluate 10 current mainstream LLMs, and conduct an in-depth discussion and analysis of their results. The ZhuJiu benchmark and open-participation leaderboard are publicly released at \url{http://www.zhujiu-benchmark.com} and we also provide a demo video at \url{https://youtu.be/qypkJ89L1Ic.} | [
"Zhang, Baoli",
"Xie, Haining",
"Du, Pengfan",
"Chen, Junhao",
"Cao, Pengfei",
"Chen, Yubo",
"Liu, Shengping",
"Liu, Kang",
"Zhao, Jun"
] | ZhuJiu: A Multi-dimensional, Multi-faceted Chinese Benchmark for Large Language Models | emnlp-demo.44 | 2308.14353 | [
""
] | https://huggingface.co/papers/2308.14353 | 0 | 0 | 0 | 9 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.45.bib | https://aclanthology.org/2023.emnlp-demo.45/ | @inproceedings{lo-etal-2023-papermage,
title = "{P}aper{M}age: A Unified Toolkit for Processing, Representing, and Manipulating Visually-Rich Scientific Documents",
author = "Lo, Kyle and
Shen, Zejiang and
Newman, Benjamin and
Chang, Joseph and
Authur, Russell and
Bransom, Erin and
Candra, Stefan and
Chandrasekhar, Yoganand and
Huff, Regan and
Kuehl, Bailey and
Singh, Amanpreet and
Wilhelm, Chris and
Zamarron, Angele and
Hearst, Marti A. and
Weld, Daniel and
Downey, Doug and
Soldaini, Luca",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.45",
doi = "10.18653/v1/2023.emnlp-demo.45",
pages = "495--507",
abstract = "Despite growing interest in applying natural language processing (NLP) and computer vision (CV) models to the scholarly domain, scientific documents remain challenging to work with. They{'}re often in difficult-to-use PDF formats, and the ecosystem of models to process them is fragmented and incomplete. We introduce PaperMage, an open-source Python toolkit for analyzing and processing visually-rich, structured scientific documents. PaperMage offers clean and intuitive abstractions for seamlessly representing and manipulating both textual and visual document elements. PaperMage achieves this by integrating disparate state-of-the-art NLP and CV models into a unified framework, and provides turn-key recipes for common scientific document processing use-cases. PaperMage has powered multiple research prototypes of AI applications over scientific documents, along with Semantic Scholar{'}s large-scale production system for processing millions of PDFs. GitHub: https://github.com/allenai/papermage",
}
| Despite growing interest in applying natural language processing (NLP) and computer vision (CV) models to the scholarly domain, scientific documents remain challenging to work with. They{'}re often in difficult-to-use PDF formats, and the ecosystem of models to process them is fragmented and incomplete. We introduce PaperMage, an open-source Python toolkit for analyzing and processing visually-rich, structured scientific documents. PaperMage offers clean and intuitive abstractions for seamlessly representing and manipulating both textual and visual document elements. PaperMage achieves this by integrating disparate state-of-the-art NLP and CV models into a unified framework, and provides turn-key recipes for common scientific document processing use-cases. PaperMage has powered multiple research prototypes of AI applications over scientific documents, along with Semantic Scholar{'}s large-scale production system for processing millions of PDFs. GitHub: https://github.com/allenai/papermage | [
"Lo, Kyle",
"Shen, Zejiang",
"Newman, Benjamin",
"Chang, Joseph",
"Authur, Russell",
"Bransom, Erin",
"C",
"ra, Stefan",
"Ch",
"rasekhar, Yogan",
"",
"Huff, Regan",
"Kuehl, Bailey",
"Singh, Amanpreet",
"Wilhelm, Chris",
"Zamarron, Angele",
"Hearst, Marti A.",
"Weld, Daniel",
"Downey, Doug",
"Soldaini, Luca"
] | PaperMage: A Unified Toolkit for Processing, Representing, and Manipulating Visually-Rich Scientific Documents | emnlp-demo.45 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-demo.46.bib | https://aclanthology.org/2023.emnlp-demo.46/ | @inproceedings{peng-etal-2023-omnievent,
title = "{O}mni{E}vent: A Comprehensive, Fair, and Easy-to-Use Toolkit for Event Understanding",
author = "Peng, Hao and
Wang, Xiaozhi and
Yao, Feng and
Wang, Zimu and
Zhu, Chuzhao and
Zeng, Kaisheng and
Hou, Lei and
Li, Juanzi",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.46",
doi = "10.18653/v1/2023.emnlp-demo.46",
pages = "508--517",
abstract = "Event understanding aims at understanding the content and relationship of events within texts, which covers multiple complicated information extraction tasks: event detection, event argument extraction, and event relation extraction. To facilitate related research and application, we present an event understanding toolkit OmniEvent, which features three desiderata: (1) Comprehensive. OmniEvent supports mainstream modeling paradigms of all the event understanding tasks and the processing of 15 widely-used English and Chinese datasets. (2) Fair. OmniEvent carefully handles the inconspicuous evaluation pitfalls reported in Peng et al. (2023), which ensures fair comparisons between different models. (3) Easy-to-use. OmniEvent is designed to be easily used by users with varying needs. We provide off-the-shelf models that can be directly deployed as web services. The modular framework also enables users to easily implement and evaluate new event understanding models with OmniEvent. The toolkit is publicly released along with the demonstration website and video.",
}
| Event understanding aims at understanding the content and relationship of events within texts, which covers multiple complicated information extraction tasks: event detection, event argument extraction, and event relation extraction. To facilitate related research and application, we present an event understanding toolkit OmniEvent, which features three desiderata: (1) Comprehensive. OmniEvent supports mainstream modeling paradigms of all the event understanding tasks and the processing of 15 widely-used English and Chinese datasets. (2) Fair. OmniEvent carefully handles the inconspicuous evaluation pitfalls reported in Peng et al. (2023), which ensures fair comparisons between different models. (3) Easy-to-use. OmniEvent is designed to be easily used by users with varying needs. We provide off-the-shelf models that can be directly deployed as web services. The modular framework also enables users to easily implement and evaluate new event understanding models with OmniEvent. The toolkit is publicly released along with the demonstration website and video. | [
"Peng, Hao",
"Wang, Xiaozhi",
"Yao, Feng",
"Wang, Zimu",
"Zhu, Chuzhao",
"Zeng, Kaisheng",
"Hou, Lei",
"Li, Juanzi"
] | OmniEvent: A Comprehensive, Fair, and Easy-to-Use Toolkit for Event Understanding | emnlp-demo.46 | 2309.14258 | [
"https://github.com/thu-keg/omnievent"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.47.bib | https://aclanthology.org/2023.emnlp-demo.47/ | @inproceedings{ding-etal-2023-cocoscisum,
title = "{C}oco{S}ci{S}um: A Scientific Summarization Toolkit with Compositional Controllability",
author = "Ding, Yixi and
Qin, Yanxia and
Liu, Qian and
Kan, Min-Yen",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.47",
doi = "10.18653/v1/2023.emnlp-demo.47",
pages = "518--526",
abstract = "We present a novel toolkit for controlled summarization of scientific documents, designed for the specific needs of the scientific community. Our system generates summaries based on user preferences, adjusting key attributes specifically of length and keyword inclusion. A distinguishing feature is its ability to manage multiple attributes concurrently, demonstrating Compositional Controllability for Scientific Summarization (CocoSciSum). Benchmarked against the strong Flan-T5 baseline, CocoSciSum exhibits superior performance on both the quality of summaries generated and the control over single and multiple attributes. Moreover, CocoSciSum is a user-centric toolkit, supporting user preferences expressed in natural language instructions, and accommodating diverse input document formats. CocoSciSum is available on GitHub (https://github.com/WING-NUS/SciAssist/tree/CocoSciSum) with an introduction video (https://youtu.be/YC1YDeEjAbQ).",
}
| We present a novel toolkit for controlled summarization of scientific documents, designed for the specific needs of the scientific community. Our system generates summaries based on user preferences, adjusting key attributes specifically of length and keyword inclusion. A distinguishing feature is its ability to manage multiple attributes concurrently, demonstrating Compositional Controllability for Scientific Summarization (CocoSciSum). Benchmarked against the strong Flan-T5 baseline, CocoSciSum exhibits superior performance on both the quality of summaries generated and the control over single and multiple attributes. Moreover, CocoSciSum is a user-centric toolkit, supporting user preferences expressed in natural language instructions, and accommodating diverse input document formats. CocoSciSum is available on GitHub (https://github.com/WING-NUS/SciAssist/tree/CocoSciSum) with an introduction video (https://youtu.be/YC1YDeEjAbQ). | [
"Ding, Yixi",
"Qin, Yanxia",
"Liu, Qian",
"Kan, Min-Yen"
] | CocoSciSum: A Scientific Summarization Toolkit with Compositional Controllability | emnlp-demo.47 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |