Dataset columns (name, type, value range):

Column | Type | Values
---|---|---
bibtex_url | string | lengths 41-53
proceedings | string | lengths 38-50
bibtext | string | lengths 528-3.02k
abstract | string | lengths 17-2.35k
authors | sequence | lengths 1-44
title | string | lengths 18-190
id | string | lengths 7-19
arxiv_id | string | lengths 0-10
GitHub | sequence | lengths 1-1
paper_page | string | 528 distinct values
n_linked_authors | int64 | -1 to 15
upvotes | int64 | -1 to 77
num_comments | int64 | -1 to 10
n_authors | int64 | -1 to 52
Models | sequence | lengths 0-100
Datasets | sequence | lengths 0-15
Spaces | sequence | lengths 0-46
paper_page_exists_pre_conf | int64 | 0-1
type | string | 2 distinct values
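A minimal sketch of how a table with this schema could be loaded and filtered using the Hugging Face `datasets` library. The repository id `your-org/emnlp-2023-papers` is a placeholder (the actual repo name is not given here), and the column names follow the schema above; integer fields such as `upvotes` use -1 as a sentinel when no paper page exists.

```python
from datasets import load_dataset

# Placeholder repository id; substitute the actual dataset repo.
ds = load_dataset("your-org/emnlp-2023-papers", split="train")

# Keep records that had a Hugging Face paper page before the conference
# and that link at least one model, dataset, or Space.
linked = ds.filter(
    lambda r: r["paper_page_exists_pre_conf"] == 1
    and (len(r["Models"]) + len(r["Datasets"]) + len(r["Spaces"])) > 0
)

# Print a few of the matching records.
for row in linked.select(range(min(5, len(linked)))):
    print(row["id"], row["title"], row["upvotes"])
```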
bibtex_url | proceedings | bibtext | abstract | authors | title | id | arxiv_id | GitHub | paper_page | n_linked_authors | upvotes | num_comments | n_authors | Models | Datasets | Spaces | paper_page_exists_pre_conf | type
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
https://aclanthology.org/2023.emnlp-demo.48.bib | https://aclanthology.org/2023.emnlp-demo.48/ | @inproceedings{lv-etal-2023-collie,
title = "{C}o{LL}i{E}: Collaborative Training of Large Language Models in an Efficient Way",
author = "Lv, Kai and
Zhang, Shuo and
Gu, Tianle and
Xing, Shuhao and
Hong, Jiawei and
Chen, Keyu and
Liu, Xiaoran and
Yang, Yuqing and
Guo, Honglin and
Liu, Tengxiao and
Sun, Yu and
Guo, Qipeng and
Yan, Hang and
Qiu, Xipeng",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.48",
doi = "10.18653/v1/2023.emnlp-demo.48",
pages = "527--542",
abstract = "Large language models (LLMs) are increasingly pivotal in a wide range of natural language processing tasks. Access to pre-trained models, courtesy of the open-source community, has made it possible to adapt these models to specific applications for enhanced performance. However, the substantial resources required for training these models necessitate efficient solutions. This paper introduces CoLLiE, an efficient library that facilitates collaborative training of large language models using 3D parallelism, parameter-efficient fine-tuning (PEFT) methods, and optimizers such as Lion, Adan, Sophia, and LOMO. With its modular design and comprehensive functionality, CoLLiE offers a balanced blend of efficiency, ease of use, and customization. CoLLiE has proven superior training efficiency in comparison with prevalent solutions in pre-training and fine-tuning scenarios. Furthermore, we provide an empirical evaluation of the correlation between model size and GPU memory consumption under different optimization methods, as well as an analysis of the throughput. Lastly, we carry out a comprehensive comparison of various optimizers and PEFT methods within the instruction-tuning context. CoLLiE is available at https://github.com/OpenLMLab/collie.",
}
| Large language models (LLMs) are increasingly pivotal in a wide range of natural language processing tasks. Access to pre-trained models, courtesy of the open-source community, has made it possible to adapt these models to specific applications for enhanced performance. However, the substantial resources required for training these models necessitate efficient solutions. This paper introduces CoLLiE, an efficient library that facilitates collaborative training of large language models using 3D parallelism, parameter-efficient fine-tuning (PEFT) methods, and optimizers such as Lion, Adan, Sophia, and LOMO. With its modular design and comprehensive functionality, CoLLiE offers a balanced blend of efficiency, ease of use, and customization. CoLLiE has proven superior training efficiency in comparison with prevalent solutions in pre-training and fine-tuning scenarios. Furthermore, we provide an empirical evaluation of the correlation between model size and GPU memory consumption under different optimization methods, as well as an analysis of the throughput. Lastly, we carry out a comprehensive comparison of various optimizers and PEFT methods within the instruction-tuning context. CoLLiE is available at https://github.com/OpenLMLab/collie. | [
"Lv, Kai",
"Zhang, Shuo",
"Gu, Tianle",
"Xing, Shuhao",
"Hong, Jiawei",
"Chen, Keyu",
"Liu, Xiaoran",
"Yang, Yuqing",
"Guo, Honglin",
"Liu, Tengxiao",
"Sun, Yu",
"Guo, Qipeng",
"Yan, Hang",
"Qiu, Xipeng"
] | CoLLiE: Collaborative Training of Large Language Models in an Efficient Way | emnlp-demo.48 | 2312.00407 | [
"https://github.com/openlmlab/collie"
] | https://huggingface.co/papers/2312.00407 | 3 | 2 | 1 | 14 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.49.bib | https://aclanthology.org/2023.emnlp-demo.49/ | @inproceedings{zhang-etal-2023-video,
title = "Video-{LL}a{MA}: An Instruction-tuned Audio-Visual Language Model for Video Understanding",
author = "Zhang, Hang and
Li, Xin and
Bing, Lidong",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.49",
doi = "10.18653/v1/2023.emnlp-demo.49",
pages = "543--553",
abstract = "We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual {\&} audio encoders and the frozen LLMs. Unlike previous works that complement LLMs to process the visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the output of both visual {\&} audio encoders with LLM{'}s embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune our model with visual-instruction datasets of moderate amount but higher quality. We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses grounded in the visual and auditory information presented in the videos.",
}
| We present Video-LLaMA, a multi-modal framework that empowers Large Language Models (LLMs) with the capability of understanding both visual and auditory content in the video. Video-LLaMA bootstraps cross-modal training from the frozen pre-trained visual {\&} audio encoders and the frozen LLMs. Unlike previous works that complement LLMs to process the visual or audio signals only, Video-LLaMA enables video comprehension by tackling two challenges: (1) capturing the temporal changes in visual scenes, (2) integrating audio-visual signals. To counter the first challenge, we propose a Video Q-former to assemble a pre-trained image encoder into our video encoder and introduce a video-to-text generation task to learn video-language correspondence. For the second challenge, we leverage ImageBind, a universal embedding model aligning multiple modalities, as the pre-trained audio encoder and introduce an Audio Q-former on top of ImageBind to learn reasonable auditory query embeddings for the LLM module. To align the output of both visual {\&} audio encoders with LLM{'}s embedding space, we first train Video-LLaMA on massive video/image-caption pairs and then tune our model with visual-instruction datasets of moderate amount but higher quality. We found Video-LLaMA shows the ability to perceive and comprehend video content and generate meaningful responses grounded in the visual and auditory information presented in the videos. | [
"Zhang, Hang",
"Li, Xin",
"Bing, Lidong"
] | Video-LLaMA: An Instruction-tuned Audio-Visual Language Model for Video Understanding | emnlp-demo.49 | 2306.02858 | [
"https://github.com/damo-nlp-sg/video-llama"
] | https://huggingface.co/papers/2306.02858 | 2 | 18 | 7 | 3 | [
"DAMO-NLP-SG/Video-LLaMA-Series",
"DAMO-NLP-SG/VideoLLaMA2-7B",
"DAMO-NLP-SG/VideoLLaMA2-7B-16F",
"vdo/Video-LLaMA-Series",
"DAMO-NLP-SG/VideoLLaMA2-72B",
"DAMO-NLP-SG/VideoLLaMA2-7B-Base",
"DAMO-NLP-SG/VideoLLaMA2-8x7B",
"DAMO-NLP-SG/VideoLLaMA2-7B-16F-Base",
"DAMO-NLP-SG/VideoLLaMA2-8x7B-Base",
"DAMO-NLP-SG/VideoLLaMA2-72B-Base"
] | [
"DAMO-NLP-SG/Multi-Source-Video-Captioning"
] | [
"DAMO-NLP-SG/Video-LLaMA",
"lixin4ever/VideoLLaMA2",
"comidan/video-llama2-test",
"cocktailpeanut/VideoLLaMA2"
] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.50.bib | https://aclanthology.org/2023.emnlp-demo.50/ | @inproceedings{slobodkin-etal-2023-summhelper,
title = "{S}umm{H}elper: Collaborative Human-Computer Summarization",
author = "Slobodkin, Aviv and
Nachum, Niv and
Amar, Shmuel and
Shapira, Ori and
Dagan, Ido",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.50",
doi = "10.18653/v1/2023.emnlp-demo.50",
pages = "554--565",
abstract = "Current approaches for text summarization are predominantly automatic, with rather limited space for human intervention and control over the process. In this paper, we introduce SummHelper, and screencast demo at \url{https://www.youtube.com/watch?v=nGcknJwGhxk} a 2-phase summarization assistant designed to foster human-machine collaboration. The initial phase involves content selection, where the system recommends potential content, allowing users to accept, modify, or introduce additional selections. The subsequent phase, content consolidation, involves SummHelper generating a coherent summary from these selections, which users can then refine using visual mappings between the summary and the source text. Small-scale user studies reveal the effectiveness of our application, with participants being especially appreciative of the balance between automated guidance and opportunities for personal input.",
}
| Current approaches for text summarization are predominantly automatic, with rather limited space for human intervention and control over the process. In this paper, we introduce SummHelper, and screencast demo at \url{https://www.youtube.com/watch?v=nGcknJwGhxk} a 2-phase summarization assistant designed to foster human-machine collaboration. The initial phase involves content selection, where the system recommends potential content, allowing users to accept, modify, or introduce additional selections. The subsequent phase, content consolidation, involves SummHelper generating a coherent summary from these selections, which users can then refine using visual mappings between the summary and the source text. Small-scale user studies reveal the effectiveness of our application, with participants being especially appreciative of the balance between automated guidance and opportunities for personal input. | [
"Slobodkin, Aviv",
"Nachum, Niv",
"Amar, Shmuel",
"Shapira, Ori",
"Dagan, Ido"
] | SummHelper: Collaborative Human-Computer Summarization | emnlp-demo.50 | 2308.08363 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-demo.51.bib | https://aclanthology.org/2023.emnlp-demo.51/ | @inproceedings{li-etal-2023-modelscope,
title = "{M}odel{S}cope-Agent: Building Your Customizable Agent System with Open-source Large Language Models",
author = "Li, Chenliang and
Chen, He and
Yan, Ming and
Shen, Weizhou and
Xu, Haiyang and
Wu, Zhikai and
Zhang, Zhicheng and
Zhou, Wenmeng and
Chen, Yingda and
Cheng, Chen and
Shi, Hongzhu and
Zhang, Ji and
Huang, Fei and
Zhou, Jingren",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.51",
doi = "10.18653/v1/2023.emnlp-demo.51",
pages = "566--578",
abstract = "Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent frameworks that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with a customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent online demo, library are now publicly available.",
}
| Large language models (LLMs) have recently demonstrated remarkable capabilities to comprehend human intentions, engage in reasoning, and design planning-like behavior. To further unleash the power of LLMs to accomplish complex tasks, there is a growing trend to build agent frameworks that equips LLMs, such as ChatGPT, with tool-use abilities to connect with massive external APIs. In this work, we introduce ModelScope-Agent, a general and customizable agent framework for real-world applications, based on open-source LLMs as controllers. It provides a user-friendly system library, with a customizable engine design to support model training on multiple open-source LLMs, while also enabling seamless integration with both model APIs and common APIs in a unified way. To equip the LLMs with tool-use abilities, a comprehensive framework has been proposed spanning tool-use data collection, tool retrieval, tool registration, memory control, customized model training, and evaluation for practical real-world applications. Finally, we showcase ModelScopeGPT, a real-world intelligent assistant of ModelScope Community based on the ModelScope-Agent framework, which is able to connect open-source LLMs with more than 1000 public AI models and localized community knowledge in ModelScope. The ModelScope-Agent online demo, library are now publicly available. | [
"Li, Chenliang",
"Chen, He",
"Yan, Ming",
"Shen, Weizhou",
"Xu, Haiyang",
"Wu, Zhikai",
"Zhang, Zhicheng",
"Zhou, Wenmeng",
"Chen, Yingda",
"Cheng, Chen",
"Shi, Hongzhu",
"Zhang, Ji",
"Huang, Fei",
"Zhou, Jingren"
] | ModelScope-Agent: Building Your Customizable Agent System with Open-source Large Language Models | emnlp-demo.51 | 2309.00986 | [
"https://github.com/modelscope/modelscope-agent"
] | https://huggingface.co/papers/2309.00986 | 9 | 17 | 1 | 14 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-demo.52.bib | https://aclanthology.org/2023.emnlp-demo.52/ | @inproceedings{bryan-etal-2023-efficientocr,
title = "{E}fficient{OCR}: An Extensible, Open-Source Package for Efficiently Digitizing World Knowledge",
author = "Bryan, Tom and
Carlson, Jacob and
Arora, Abhishek and
Dell, Melissa",
editor = "Feng, Yansong and
Lefever, Els",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: System Demonstrations",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-demo.52",
doi = "10.18653/v1/2023.emnlp-demo.52",
pages = "579--596",
abstract = "Billions of public domain documents remain trapped in hard copy or lack an accurate digitization. Modern natural language processing methods cannot be used to index, retrieve, and summarize their texts; conduct computational textual analyses; or extract information for statistical analyses, and these texts cannot be incorporated into language model training. Given the diversity and sheer quantity of public domain texts, liberating them at scale requires optical character recognition (OCR) that is accurate, extremely cheap to deploy, and sample-efficient to customize to novel collections, languages, and character sets. Existing OCR engines, largely designed for small-scale commercial applications in high resource languages, often fall short of these requirements. EffOCR (EfficientOCR), a novel open-source OCR package, meets both the computational and sample efficiency requirements for liberating texts at scale by abandoning the sequence-to-sequence architecture typically used for OCR, which takes representations from a learned vision model as inputs to a learned language model. Instead, EffOCR models OCR as a character or word-level image retrieval problem. EffOCR is cheap and sample efficient to train, as the model only needs to learn characters{'} visual appearance and not how they are used in sequence to form language. Models in the EffOCR model zoo can be deployed off-the-shelf with only a few lines of code and include lightweight models designed for mobile phones that are extremely cheap to deploy. Importantly, EffOCR also allows for easy, sample efficient customization with a simple model training interface and minimal labeling requirements due to its sample efficiency. We illustrate the utility of EffOCR by cheaply and accurately digitizing 20 million historical U.S. newspaper scans, evaluating zero-shot performance on randomly selected documents from the U.S. National Archives, and accurately digitizing a Japanese document collection for which all other OCR solutions failed.",
}
| Billions of public domain documents remain trapped in hard copy or lack an accurate digitization. Modern natural language processing methods cannot be used to index, retrieve, and summarize their texts; conduct computational textual analyses; or extract information for statistical analyses, and these texts cannot be incorporated into language model training. Given the diversity and sheer quantity of public domain texts, liberating them at scale requires optical character recognition (OCR) that is accurate, extremely cheap to deploy, and sample-efficient to customize to novel collections, languages, and character sets. Existing OCR engines, largely designed for small-scale commercial applications in high resource languages, often fall short of these requirements. EffOCR (EfficientOCR), a novel open-source OCR package, meets both the computational and sample efficiency requirements for liberating texts at scale by abandoning the sequence-to-sequence architecture typically used for OCR, which takes representations from a learned vision model as inputs to a learned language model. Instead, EffOCR models OCR as a character or word-level image retrieval problem. EffOCR is cheap and sample efficient to train, as the model only needs to learn characters{'} visual appearance and not how they are used in sequence to form language. Models in the EffOCR model zoo can be deployed off-the-shelf with only a few lines of code and include lightweight models designed for mobile phones that are extremely cheap to deploy. Importantly, EffOCR also allows for easy, sample efficient customization with a simple model training interface and minimal labeling requirements due to its sample efficiency. We illustrate the utility of EffOCR by cheaply and accurately digitizing 20 million historical U.S. newspaper scans, evaluating zero-shot performance on randomly selected documents from the U.S. National Archives, and accurately digitizing a Japanese document collection for which all other OCR solutions failed. | [
"Bryan, Tom",
"Carlson, Jacob",
"Arora, Abhishek",
"Dell, Melissa"
] | EfficientOCR: An Extensible, Open-Source Package for Efficiently Digitizing World Knowledge | emnlp-demo.52 | 2310.10050 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.1.bib | https://aclanthology.org/2023.emnlp-industry.1/ | @inproceedings{cao-etal-2023-beautifulprompt,
title = "{B}eautiful{P}rompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis",
author = "Cao, Tingfeng and
Wang, Chengyu and
Liu, Bingyan and
Wu, Ziheng and
Zhu, Jinhui and
Huang, Jun",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.1",
doi = "10.18653/v1/2023.emnlp-industry.1",
pages = "1--11",
abstract = "Recently, diffusion-based deep generative models (e.g., Stable Diffusion) have shown impressive results in text-to-image synthesis. However, current text-to-image models often require multiple passes of prompt engineering by humans in order to produce satisfactory results for real-world applications. We propose BeautifulPrompt, a deep generative model to produce high-quality prompts from very simple raw descriptions, which enables diffusion-based models to generate more beautiful images. In our work, we first fine-tuned the BeautifulPrompt model over low-quality and high-quality collecting prompt pairs. Then, to ensure that our generated prompts can generate more beautiful images, we further propose a Reinforcement Learning with Visual AI Feedback technique to fine-tune our model to maximize the reward values of the generated prompts, where the reward values are calculated based on the PickScore and the Aesthetic Scores. Our results demonstrate that learning from visual AI feedback promises the potential to improve the quality of generated prompts and images significantly. We further showcase the integration of BeautifulPrompt to a cloud-native AI platform to provide better text-to-image generation service in the cloud.",
}
| Recently, diffusion-based deep generative models (e.g., Stable Diffusion) have shown impressive results in text-to-image synthesis. However, current text-to-image models often require multiple passes of prompt engineering by humans in order to produce satisfactory results for real-world applications. We propose BeautifulPrompt, a deep generative model to produce high-quality prompts from very simple raw descriptions, which enables diffusion-based models to generate more beautiful images. In our work, we first fine-tuned the BeautifulPrompt model over low-quality and high-quality collecting prompt pairs. Then, to ensure that our generated prompts can generate more beautiful images, we further propose a Reinforcement Learning with Visual AI Feedback technique to fine-tune our model to maximize the reward values of the generated prompts, where the reward values are calculated based on the PickScore and the Aesthetic Scores. Our results demonstrate that learning from visual AI feedback promises the potential to improve the quality of generated prompts and images significantly. We further showcase the integration of BeautifulPrompt to a cloud-native AI platform to provide better text-to-image generation service in the cloud. | [
"Cao, Tingfeng",
"Wang, Chengyu",
"Liu, Bingyan",
"Wu, Ziheng",
"Zhu, Jinhui",
"Huang, Jun"
] | BeautifulPrompt: Towards Automatic Prompt Engineering for Text-to-Image Synthesis | emnlp-industry.1 | 2311.06752 | [
""
] | https://huggingface.co/papers/2311.06752 | 0 | 1 | 0 | 6 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.2.bib | https://aclanthology.org/2023.emnlp-industry.2/ | @inproceedings{mao-etal-2023-enhancing,
title = "Enhancing Language Model with Unit Test Techniques for Efficient Regular Expression Generation",
author = "Mao, Chenhui and
Lin, Xiexiong and
Jin, Xin and
Zhang, Xin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.2",
doi = "10.18653/v1/2023.emnlp-industry.2",
pages = "12--19",
abstract = "Recent research has investigated the use of generative language models to produce regular expressions with semantic-based approaches. However, these approaches have shown shortcomings in practical applications, particularly in terms of functional correctness, which refers to the ability to reproduce the intended function inputs by the user. To address this issue, we present a novel method called Unit-Test Driven Reinforcement Learning (UTD-RL). Our approach differs from previous methods by taking into account the crucial aspect of functional correctness and transforming it into a differentiable gradient feedback using policy gradient techniques. In which functional correctness can be evaluated through Unit Tests, a testing method that ensures regular expressions meets its design and performs as intended. Experiments conducted on three public datasets demonstrate the effectiveness of the proposed method in generating regular expressions. This method has been employed in a regulatory scenario where regular expressions can be utilized to ensure that all online content is free from non-compliant elements, thereby significantly reducing the workload of relevant personnel.",
}
| Recent research has investigated the use of generative language models to produce regular expressions with semantic-based approaches. However, these approaches have shown shortcomings in practical applications, particularly in terms of functional correctness, which refers to the ability to reproduce the intended function inputs by the user. To address this issue, we present a novel method called Unit-Test Driven Reinforcement Learning (UTD-RL). Our approach differs from previous methods by taking into account the crucial aspect of functional correctness and transforming it into a differentiable gradient feedback using policy gradient techniques. In which functional correctness can be evaluated through Unit Tests, a testing method that ensures regular expressions meets its design and performs as intended. Experiments conducted on three public datasets demonstrate the effectiveness of the proposed method in generating regular expressions. This method has been employed in a regulatory scenario where regular expressions can be utilized to ensure that all online content is free from non-compliant elements, thereby significantly reducing the workload of relevant personnel. | [
"Mao, Chenhui",
"Lin, Xiexiong",
"Jin, Xin",
"Zhang, Xin"
] | Enhancing Language Model with Unit Test Techniques for Efficient Regular Expression Generation | emnlp-industry.2 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.3.bib | https://aclanthology.org/2023.emnlp-industry.3/ | @inproceedings{udagawa-etal-2023-comparative,
title = "A Comparative Analysis of Task-Agnostic Distillation Methods for Compressing Transformer Language Models",
author = "Udagawa, Takuma and
Trivedi, Aashka and
Merler, Michele and
Bhattacharjee, Bishwaranjan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.3",
doi = "10.18653/v1/2023.emnlp-industry.3",
pages = "20--31",
abstract = "Large language models have become a vital component in modern NLP, achieving state of the art performance in a variety of tasks. However, they are often inefficient for real-world deployment due to their expensive inference costs. Knowledge distillation is a promising technique to improve their efficiency while retaining most of their effectiveness. In this paper, we reproduce, compare and analyze several representative methods for task-agnostic (general-purpose) distillation of Transformer language models. Our target of study includes Output Distribution (OD) transfer, Hidden State (HS) transfer with various layer mapping strategies, and Multi-Head Attention (MHA) transfer based on MiniLMv2. Through our extensive experiments, we study the effectiveness of each method for various student architectures in both monolingual (English) and multilingual settings. Overall, we show that MHA transfer based on MiniLMv2 is generally the best option for distillation and explain the potential reasons behind its success. Moreover, we show that HS transfer remains as a competitive baseline, especially under a sophisticated layer mapping strategy, while OD transfer consistently lags behind other approaches. Findings from this study helped us deploy efficient yet effective student models for latency-critical applications.",
}
| Large language models have become a vital component in modern NLP, achieving state of the art performance in a variety of tasks. However, they are often inefficient for real-world deployment due to their expensive inference costs. Knowledge distillation is a promising technique to improve their efficiency while retaining most of their effectiveness. In this paper, we reproduce, compare and analyze several representative methods for task-agnostic (general-purpose) distillation of Transformer language models. Our target of study includes Output Distribution (OD) transfer, Hidden State (HS) transfer with various layer mapping strategies, and Multi-Head Attention (MHA) transfer based on MiniLMv2. Through our extensive experiments, we study the effectiveness of each method for various student architectures in both monolingual (English) and multilingual settings. Overall, we show that MHA transfer based on MiniLMv2 is generally the best option for distillation and explain the potential reasons behind its success. Moreover, we show that HS transfer remains as a competitive baseline, especially under a sophisticated layer mapping strategy, while OD transfer consistently lags behind other approaches. Findings from this study helped us deploy efficient yet effective student models for latency-critical applications. | [
"Udagawa, Takuma",
"Trivedi, Aashka",
"Merler, Michele",
"Bhattacharjee, Bishwaranjan"
] | A Comparative Analysis of Task-Agnostic Distillation Methods for Compressing Transformer Language Models | emnlp-industry.3 | 2310.08797 | [
""
] | https://huggingface.co/papers/2310.08797 | 0 | 1 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.4.bib | https://aclanthology.org/2023.emnlp-industry.4/ | @inproceedings{zhang-etal-2023-towards-effective,
title = "Towards Effective Automatic Debt Collection with Persona Awareness",
author = "Zhang, Tong and
Liu, Junhong and
Huang, Chen and
Liu, Jia and
Liang, Hongru and
Wen, Zujie and
Lei, Wenqiang",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.4",
doi = "10.18653/v1/2023.emnlp-industry.4",
pages = "32--45",
abstract = "Understanding debtor personas is crucial for collectors to empathize with debtors and develop more effective collection strategies. In this paper, we take the first step towards comprehensively investigating the significance of debtor personas and present a successful commercial practice on automatic debt collection agents. Specifically, we organize the debtor personas into a taxonomy and construct a persona-aware conversation dataset. Building upon it, we implement a simple yet effective persona-aware agent called PAD. After two-month online testing, PAD increases the recovery rate by 3.31{\%} and collects an additional {\textasciitilde}100K RMB. Our commercial practice brings inspiration to the debt collection industry by providing an effective automatic solution.",
}
| Understanding debtor personas is crucial for collectors to empathize with debtors and develop more effective collection strategies. In this paper, we take the first step towards comprehensively investigating the significance of debtor personas and present a successful commercial practice on automatic debt collection agents. Specifically, we organize the debtor personas into a taxonomy and construct a persona-aware conversation dataset. Building upon it, we implement a simple yet effective persona-aware agent called PAD. After two-month online testing, PAD increases the recovery rate by 3.31{\%} and collects an additional {\textasciitilde}100K RMB. Our commercial practice brings inspiration to the debt collection industry by providing an effective automatic solution. | [
"Zhang, Tong",
"Liu, Junhong",
"Huang, Chen",
"Liu, Jia",
"Liang, Hongru",
"Wen, Zujie",
"Lei, Wenqiang"
] | Towards Effective Automatic Debt Collection with Persona Awareness | emnlp-industry.4 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.5.bib | https://aclanthology.org/2023.emnlp-industry.5/ | @inproceedings{tiwari-etal-2023-gatekeeper,
title = "Gatekeeper to save {COGS} and improve efficiency of Text Prediction",
author = "Tiwari, Nidhi and
Kola, Sneha and
Milunovic, Milos and
Chen, Si-qing and
Slavkovski, Marjan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.5",
doi = "10.18653/v1/2023.emnlp-industry.5",
pages = "46--53",
abstract = "The text prediction (TP) workflow calls a Large Language Model (LLM), almost, after every character to get subsequent sequence of characters, till user accepts a suggestion. The confidence score of the prediction is commonly used for filtering the results to ensure that only correct predictions are shown to user. As LLMs require massive amounts of computation and storage, such an approach incurs network and high execution cost. So, we propose a Model gatekeeper (GK) to stop the LLM calls that will result in incorrect predictions at client application level itself. This way a GK can save cost of model inference and improve user experience by not showing the incorrect predictions. We demonstrate that use of a model gatekeeper saved approx 46.6{\%} of COGS for TP, at the cost of approx 4.5{\%} loss in character saving. Use of GK also improved the efficiency (suggestion rate) of TP model by 73{\%}.",
}
| The text prediction (TP) workflow calls a Large Language Model (LLM), almost, after every character to get subsequent sequence of characters, till user accepts a suggestion. The confidence score of the prediction is commonly used for filtering the results to ensure that only correct predictions are shown to user. As LLMs require massive amounts of computation and storage, such an approach incurs network and high execution cost. So, we propose a Model gatekeeper (GK) to stop the LLM calls that will result in incorrect predictions at client application level itself. This way a GK can save cost of model inference and improve user experience by not showing the incorrect predictions. We demonstrate that use of a model gatekeeper saved approx 46.6{\%} of COGS for TP, at the cost of approx 4.5{\%} loss in character saving. Use of GK also improved the efficiency (suggestion rate) of TP model by 73{\%}. | [
"Tiwari, Nidhi",
"Kola, Sneha",
"Milunovic, Milos",
"Chen, Si-qing",
"Slavkovski, Marjan"
] | Gatekeeper to save COGS and improve efficiency of Text Prediction | emnlp-industry.5 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.6.bib | https://aclanthology.org/2023.emnlp-industry.6/ | @inproceedings{brown-etal-2023-efficient,
title = "Efficient Transformer Knowledge Distillation: A Performance Review",
author = "Brown, Nathan and
Williamson, Ashton and
Anderson, Tahj and
Lawrence, Logan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.6",
doi = "10.18653/v1/2023.emnlp-industry.6",
pages = "54--65",
abstract = "As pretrained transformer language models continue to achieve state-of-the-art performance, the Natural Language Processing community has pushed for advances in model compression and efficient attention mechanisms to address high computational requirements and limited input sequence length. Despite these separate efforts, no investigation has been done into the intersection of these two fields. In this work, we provide an evaluation of model compression via knowledge distillation on efficient attention transformers. We provide cost-performance trade-offs for the compression of state-of-the-art efficient attention architectures and the gains made in performance in comparison to their full attention counterparts. Furthermore, we introduce a new long-context Named Entity Recognition dataset, GONERD, to train and test the performance of NER models on long sequences. We find that distilled efficient attention transformers can preserve a significant amount of original model performance, preserving up to \textbf{98.6{\%}} across short-context tasks (GLUE, SQUAD, CoNLL-2003), up to \textbf{94.6{\%}} across long-context Question-and-Answering tasks (HotpotQA, TriviaQA), and up to \textbf{98.8{\%}} on long-context Named Entity Recognition (GONERD), while decreasing inference times by up to \textbf{57.8{\%}}. We find that, for most models on most tasks, performing knowledge distillation is an effective method to yield high-performing efficient attention models with low costs.",
}
| As pretrained transformer language models continue to achieve state-of-the-art performance, the Natural Language Processing community has pushed for advances in model compression and efficient attention mechanisms to address high computational requirements and limited input sequence length. Despite these separate efforts, no investigation has been done into the intersection of these two fields. In this work, we provide an evaluation of model compression via knowledge distillation on efficient attention transformers. We provide cost-performance trade-offs for the compression of state-of-the-art efficient attention architectures and the gains made in performance in comparison to their full attention counterparts. Furthermore, we introduce a new long-context Named Entity Recognition dataset, GONERD, to train and test the performance of NER models on long sequences. We find that distilled efficient attention transformers can preserve a significant amount of original model performance, preserving up to \textbf{98.6{\%}} across short-context tasks (GLUE, SQUAD, CoNLL-2003), up to \textbf{94.6{\%}} across long-context Question-and-Answering tasks (HotpotQA, TriviaQA), and up to \textbf{98.8{\%}} on long-context Named Entity Recognition (GONERD), while decreasing inference times by up to \textbf{57.8{\%}}. We find that, for most models on most tasks, performing knowledge distillation is an effective method to yield high-performing efficient attention models with low costs. | [
"Brown, Nathan",
"Williamson, Ashton",
"Anderson, Tahj",
"Lawrence, Logan"
] | Efficient Transformer Knowledge Distillation: A Performance Review | emnlp-industry.6 | 2311.13657 | [
""
] | https://huggingface.co/papers/2311.13657 | 2 | 1 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.7.bib | https://aclanthology.org/2023.emnlp-industry.7/ | @inproceedings{ji-etal-2023-cdd,
title = "{CDD}: A Large Scale Dataset for Legal Intelligence Research",
author = "Ji, Changzhen and
Zhang, Yating and
Jatowt, Adam and
Wu, Haipang",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.7",
doi = "10.18653/v1/2023.emnlp-industry.7",
pages = "66--73",
abstract = "As an important application of Artificial Intelligence, legal intelligence has recently attracted the attention of many researchers. Previous works investigated diverse issues like predicting crimes, predicting outcomes of judicial debates, or extracting information/knowledge from various kinds of legal documents. Although many advances have been made, the research on supporting prediction of court judgments remains relatively scarce, while the lack of large-scale data resources limits the development of this research.In this paper, we present a novel, large-size Court Debate Dataset (CDD), which includes 30,481 court cases, totaling 1,144,425 utterances. CDD contains real-world conversations involving judges, plaintiffs and defendants in court trials. To construct this dataset we have invited experienced judges to design appropriate labels for data records. We then asked law school students to provide annotations based on the defined labels. The dataset can be applied to several downstream tasks, such as text summarization, dialogue generation, text classification, etc. We introduce the details of the different tasks in the rapidly developing field of legal intelligence, the research of which can be fostered thanks to our dataset, and we provide the corresponding benchmark performance.",
}
| As an important application of Artificial Intelligence, legal intelligence has recently attracted the attention of many researchers. Previous works investigated diverse issues like predicting crimes, predicting outcomes of judicial debates, or extracting information/knowledge from various kinds of legal documents. Although many advances have been made, the research on supporting prediction of court judgments remains relatively scarce, while the lack of large-scale data resources limits the development of this research.In this paper, we present a novel, large-size Court Debate Dataset (CDD), which includes 30,481 court cases, totaling 1,144,425 utterances. CDD contains real-world conversations involving judges, plaintiffs and defendants in court trials. To construct this dataset we have invited experienced judges to design appropriate labels for data records. We then asked law school students to provide annotations based on the defined labels. The dataset can be applied to several downstream tasks, such as text summarization, dialogue generation, text classification, etc. We introduce the details of the different tasks in the rapidly developing field of legal intelligence, the research of which can be fostered thanks to our dataset, and we provide the corresponding benchmark performance. | [
"Ji, Changzhen",
"Zhang, Yating",
"Jatowt, Adam",
"Wu, Haipang"
] | CDD: A Large Scale Dataset for Legal Intelligence Research | emnlp-industry.7 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.8.bib | https://aclanthology.org/2023.emnlp-industry.8/ | @inproceedings{tits-2023-must,
title = "{MUST}{\&}{P}-{SRL}: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning",
author = "Tits, No{\'e}",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.8",
doi = "10.18653/v1/2023.emnlp-industry.8",
pages = "74--82",
abstract = "In this paper, we present a methodology for linguistic feature extraction, focusing particularly on automatically syllabifying words in multiple languages, with a design to be compatible with a forced-alignment tool, the Montreal Forced Aligner (MFA). In both the textual and phonetic domains, our method focuses on the extraction of phonetic transcriptions from text, stress marks, and a unified automatic syllabification (in text and phonetic domains). The system was built with open-source components and resources. Through an ablation study, we demonstrate the efficacy of our approach in automatically syllabifying words from several languages (English, French and Spanish). Additionally, we apply the technique to the transcriptions of the CMU ARCTIC dataset, generating valuable annotations available online (https://github.com/noetits/MUST{\_}P-SRL) that are ideal for speech representation learning, speech unit discovery, and disentanglement of speech factors in several speech-related fields.",
}
| In this paper, we present a methodology for linguistic feature extraction, focusing particularly on automatically syllabifying words in multiple languages, with a design to be compatible with a forced-alignment tool, the Montreal Forced Aligner (MFA). In both the textual and phonetic domains, our method focuses on the extraction of phonetic transcriptions from text, stress marks, and a unified automatic syllabification (in text and phonetic domains). The system was built with open-source components and resources. Through an ablation study, we demonstrate the efficacy of our approach in automatically syllabifying words from several languages (English, French and Spanish). Additionally, we apply the technique to the transcriptions of the CMU ARCTIC dataset, generating valuable annotations available online (https://github.com/noetits/MUST{\_}P-SRL) that are ideal for speech representation learning, speech unit discovery, and disentanglement of speech factors in several speech-related fields. | [
"Tits, No{\\'e}"
] | MUST&P-SRL: Multi-lingual and Unified Syllabification in Text and Phonetic Domains for Speech Representation Learning | emnlp-industry.8 | 2310.11541 | [
"https://github.com/noetits/must_p-srl"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.9.bib | https://aclanthology.org/2023.emnlp-industry.9/ | @inproceedings{belyi-etal-2023-personalized,
title = "Personalized Dense Retrieval on Global Index for Voice-enabled Conversational Systems",
author = "Belyi, Masha and
Dzialo, Charlotte and
Dwivedi, Chaitanya and
Muppidi, Prajit and
Shimizu, Kanna",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.9",
doi = "10.18653/v1/2023.emnlp-industry.9",
pages = "83--92",
abstract = "Voice-controlled AI dialogue systems are susceptible to noise from phonetic variations and failure to resolve ambiguous entities. Typically, personalized entity resolution (ER) and/or query rewrites (QR) are deployed to recover from these error modes. Previous work in this field achieves personalization by constraining retrieval search space to personalized indices built from user{'}s historical interactions with the device. While constrained retrieval achieves high precision, predictions are limited to entities in recent user history, which offers low coverage of future requests. Further, maintaining individual indices for millions of users is memory intensive and difficult to scale. In this work, we propose a personalized entity retrieval system that is robust to phonetic noise and ambiguity but is not limited to a personalized index. We achieve this by embedding user listening preferences into a contextual query embedding used in retrieval. We demonstrate our model{'}s ability to correct multiple error modes and show 91{\%} improvement over baseline on the entity retrieval task. Finally, we optimize the end-to-end approach to fit within online latency constraints while maintaining gains in performance.",
}
| Voice-controlled AI dialogue systems are susceptible to noise from phonetic variations and failure to resolve ambiguous entities. Typically, personalized entity resolution (ER) and/or query rewrites (QR) are deployed to recover from these error modes. Previous work in this field achieves personalization by constraining retrieval search space to personalized indices built from user{'}s historical interactions with the device. While constrained retrieval achieves high precision, predictions are limited to entities in recent user history, which offers low coverage of future requests. Further, maintaining individual indices for millions of users is memory intensive and difficult to scale. In this work, we propose a personalized entity retrieval system that is robust to phonetic noise and ambiguity but is not limited to a personalized index. We achieve this by embedding user listening preferences into a contextual query embedding used in retrieval. We demonstrate our model{'}s ability to correct multiple error modes and show 91{\%} improvement over baseline on the entity retrieval task. Finally, we optimize the end-to-end approach to fit within online latency constraints while maintaining gains in performance. | [
"Belyi, Masha",
"Dzialo, Charlotte",
"Dwivedi, Chaitanya",
"Muppidi, Prajit",
"Shimizu, Kanna"
] | Personalized Dense Retrieval on Global Index for Voice-enabled Conversational Systems | emnlp-industry.9 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.10.bib | https://aclanthology.org/2023.emnlp-industry.10/ | @inproceedings{wang-etal-2023-text2topic,
title = "{T}ext2{T}opic: Multi-Label Text Classification System for Efficient Topic Detection in User Generated Content with Zero-Shot Capabilities",
author = "Wang, Fengjun and
Beladev, Moran and
Kleinfeld, Ofri and
Frayerman, Elina and
Shachar, Tal and
Fainman, Eran and
Lastmann Assaraf, Karen and
Mizrachi, Sarai and
Wang, Benjamin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.10",
doi = "10.18653/v1/2023.emnlp-industry.10",
pages = "93--103",
abstract = "Multi-label text classification is a critical task in the industry. It helps to extract structured information from large amount of textual data. We propose Text to Topic (Text2Topic), which achieves high multi-label classification performance by employing a Bi-Encoder Transformer architecture that utilizes concatenation, subtraction, and multiplication of embeddings on both text and topic. Text2Topic also supports zero-shot predictions, produces domain-specific text embeddings, and enables production-scale batch-inference with high throughput. The final model achieves accurate and comprehensive results compared to state-of-the-art baselines, including large language models (LLMs). In this study, a total of 239 topics are defined, and around 1.6 million text-topic pairs annotations (in which 200K are positive) are collected on approximately 120K texts from 3 main data sources on Booking.com. The data is collected with optimized smart sampling and partial labeling. The final Text2Topic model is deployed on a real-world stream processing platform, and it outperforms other models with 92.9{\%} micro mAP, as well as a 75.8{\%} macro mAP score. We summarize the modeling choices which are extensively tested through ablation studies, and share detailed in-production decision-making steps.",
}
| Multi-label text classification is a critical task in the industry. It helps to extract structured information from large amount of textual data. We propose Text to Topic (Text2Topic), which achieves high multi-label classification performance by employing a Bi-Encoder Transformer architecture that utilizes concatenation, subtraction, and multiplication of embeddings on both text and topic. Text2Topic also supports zero-shot predictions, produces domain-specific text embeddings, and enables production-scale batch-inference with high throughput. The final model achieves accurate and comprehensive results compared to state-of-the-art baselines, including large language models (LLMs). In this study, a total of 239 topics are defined, and around 1.6 million text-topic pairs annotations (in which 200K are positive) are collected on approximately 120K texts from 3 main data sources on Booking.com. The data is collected with optimized smart sampling and partial labeling. The final Text2Topic model is deployed on a real-world stream processing platform, and it outperforms other models with 92.9{\%} micro mAP, as well as a 75.8{\%} macro mAP score. We summarize the modeling choices which are extensively tested through ablation studies, and share detailed in-production decision-making steps. | [
"Wang, Fengjun",
"Beladev, Moran",
"Kleinfeld, Ofri",
"Frayerman, Elina",
"Shachar, Tal",
"Fainman, Eran",
"Lastmann Assaraf, Karen",
"Mizrachi, Sarai",
"Wang, Benjamin"
] | Text2Topic: Multi-Label Text Classification System for Efficient Topic Detection in User Generated Content with Zero-Shot Capabilities | emnlp-industry.10 | 2310.14817 | [
""
] | https://huggingface.co/papers/2310.14817 | 1 | 0 | 0 | 9 | [] | [
"Booking-com/accommodation-reviews"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.11.bib | https://aclanthology.org/2023.emnlp-industry.11/ | @inproceedings{koo-etal-2023-deep,
title = "Deep Metric Learning to Hierarchically Rank - An Application in Product Retrieval",
author = "Koo, Kee Kiat and
Joshi, Ashutosh and
Reddy, Nishaanth and
Bouyarmane, Karim and
Tutar, Ismail and
Petricek, Vaclav and
Yuan, Changhe",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.11",
doi = "10.18653/v1/2023.emnlp-industry.11",
pages = "104--112",
abstract = "Most e-commerce search engines use customer behavior signals to augment lexical matching and improve search relevance. Many e-commerce companies like Amazon, Alibaba, Ebay etc. operate in multiple countries with country specific stores. However, customer behavior data is sparse in newer stores. To compensate for sparsity of behavioral data in low traffic stores, search engines often use cross-listed products in some form. However, cross-listing across stores is not uniform and in many cases itself sparse. In this paper, we develop a model to identify duplicate and near-duplicate products across stores. Such a model can be used to unify product catalogs worldwide, improve product meta-data or as in our case, use near-duplicate products across multiple to improve search relevance. To capture the product similarity hierarchy, we develop an approach that integrates retrieval and ranking tasks across multiple languages in a single step based on a novel Hierarchical Ranked Multi Similarity (HRMS) Loss that combines Multi-Similarity (MS) loss and Hierarchical Triplet Loss to learn a hierarchical metric space. Our method outperforms strong baselines in terms of catalog coverage and precision of the mappings. We also show via online A/B tests that the product mappings found by our method are successful at improving search quality in low traffic stores, measured in rate of searches with at least one click, significantly by 0.8{\%} and improving cold start product engagement measured as new product clicks significantly by 1.72{\%} in established stores.",
}
| Most e-commerce search engines use customer behavior signals to augment lexical matching and improve search relevance. Many e-commerce companies like Amazon, Alibaba, Ebay etc. operate in multiple countries with country specific stores. However, customer behavior data is sparse in newer stores. To compensate for sparsity of behavioral data in low traffic stores, search engines often use cross-listed products in some form. However, cross-listing across stores is not uniform and in many cases itself sparse. In this paper, we develop a model to identify duplicate and near-duplicate products across stores. Such a model can be used to unify product catalogs worldwide, improve product meta-data or, as in our case, use near-duplicate products across multiple stores to improve search relevance. To capture the product similarity hierarchy, we develop an approach that integrates retrieval and ranking tasks across multiple languages in a single step based on a novel Hierarchical Ranked Multi Similarity (HRMS) Loss that combines Multi-Similarity (MS) loss and Hierarchical Triplet Loss to learn a hierarchical metric space. Our method outperforms strong baselines in terms of catalog coverage and precision of the mappings. We also show via online A/B tests that the product mappings found by our method are successful at improving search quality in low traffic stores, measured in rate of searches with at least one click, significantly by 0.8{\%} and improving cold start product engagement measured as new product clicks significantly by 1.72{\%} in established stores. | [
"Koo, Kee Kiat",
"Joshi, Ashutosh",
"Reddy, Nishaanth",
"Bouyarmane, Karim",
"Tutar, Ismail",
"Petricek, Vaclav",
"Yuan, Changhe"
] | Deep Metric Learning to Hierarchically Rank - An Application in Product Retrieval | emnlp-industry.11 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.12.bib | https://aclanthology.org/2023.emnlp-industry.12/ | @inproceedings{park-you-2023-pretrained,
title = "A Pretrained Language Model for Cyber Threat Intelligence",
author = "Park, Youngja and
You, Weiqiu",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.12",
doi = "10.18653/v1/2023.emnlp-industry.12",
pages = "113--122",
abstract = "We present a new BERT model for the cybersecurity domain, CTI-BERT, which can improve the accuracy of cyber threat intelligence (CTI) extraction, enabling organizations to better defend against potential cyber threats. We provide detailed information about the domain corpus collection, the training methodology and its effectiveness for a variety of NLP tasks for the cybersecurity domain. The experiments show that CTI-BERT significantly outperforms several general-domain and security-domain models for these cybersecurity applications indicating that the training data and methodology have a significant impact on the model performance.",
}
| We present a new BERT model for the cybersecurity domain, CTI-BERT, which can improve the accuracy of cyber threat intelligence (CTI) extraction, enabling organizations to better defend against potential cyber threats. We provide detailed information about the domain corpus collection, the training methodology and its effectiveness for a variety of NLP tasks for the cybersecurity domain. The experiments show that CTI-BERT significantly outperforms several general-domain and security-domain models for these cybersecurity applications indicating that the training data and methodology have a significant impact on the model performance. | [
"Park, Youngja",
"You, Weiqiu"
] | A Pretrained Language Model for Cyber Threat Intelligence | emnlp-industry.12 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.13.bib | https://aclanthology.org/2023.emnlp-industry.13/ | @inproceedings{tian-etal-2023-samp,
title = "{SAMP}: A Model Inference Toolkit of Post-Training Quantization for Text Processing via Self-Adaptive Mixed-Precision",
author = "Tian, Rong and
Zhao, Zijing and
Liu, Weijie and
Liu, Haoyan and
Mao, Weiquan and
Zhao, Zhe and
Zhou, Kan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.13",
doi = "10.18653/v1/2023.emnlp-industry.13",
pages = "123--130",
abstract = "The latest industrial inference engines, such as FasterTransformer and TurboTransformers, have verified that half-precision floating point (FP16) and 8-bit integer (INT8) quantization can greatly improve model inference speed. However, the existing INT8 quantization methods are too complicated, and improper usage will lead to model performance damage greatly. In this paper, we develop a toolkit for users to easily quantize their models for inference, in which Self-Adaptive Mixed-Precision (SAMP) is proposed to automatically control quantization rate by a mixed-precision architecture to balance model accuracy and efficiency. Experimental results show that our SAMP toolkit has a higher speedup than PyTorch and FasterTransformer while ensuring the required accuracy. In addition, SAMP is based on a modular design, decoupling the tokenizer, embedding, encoder and target layers, which allows users to handle various downstream tasks and can be seamlessly integrated into PyTorch.",
}
| The latest industrial inference engines, such as FasterTransformer and TurboTransformers, have verified that half-precision floating point (FP16) and 8-bit integer (INT8) quantization can greatly improve model inference speed. However, the existing INT8 quantization methods are too complicated, and improper usage can greatly degrade model performance. In this paper, we develop a toolkit for users to easily quantize their models for inference, in which Self-Adaptive Mixed-Precision (SAMP) is proposed to automatically control the quantization rate by a mixed-precision architecture to balance model accuracy and efficiency. Experimental results show that our SAMP toolkit has a higher speedup than PyTorch and FasterTransformer while ensuring the required accuracy. In addition, SAMP is based on a modular design, decoupling the tokenizer, embedding, encoder and target layers, which allows users to handle various downstream tasks and can be seamlessly integrated into PyTorch. | [
"Tian, Rong",
"Zhao, Zijing",
"Liu, Weijie",
"Liu, Haoyan",
"Mao, Weiquan",
"Zhao, Zhe",
"Zhou, Kan"
] | SAMP: A Model Inference Toolkit of Post-Training Quantization for Text Processing via Self-Adaptive Mixed-Precision | emnlp-industry.13 | 2209.09130 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.14.bib | https://aclanthology.org/2023.emnlp-industry.14/ | @inproceedings{agrawal-etal-2023-kd,
title = "{KD}-Boost: Boosting Real-Time Semantic Matching in {E}-commerce with Knowledge Distillation",
author = "Agrawal, Sanjay and
Sembium, Vivek and
M S, Ankith",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.14",
doi = "10.18653/v1/2023.emnlp-industry.14",
pages = "131--141",
abstract = "Real-time semantic matching is vital to web and product search. Transformer-based models have shown to be highly effective at encoding queries into an embedding space where semantically similar entities (queries or results) are in close proximity. However, the computational complexity of large transformer models limits their utilization for real-time matching. In this paper, we propose KD-Boost, a novel knowledge distillation algorithm designed for real-time semantic matching. KD-Boost trains low latency accurate student models by leveraging soft labels from a teacher model as well as ground truth via pairwise query-product and query-query signal derived from direct audits, user behavior, and taxonomy-based data using custom loss functions. Experiments on internal and external e-commerce datasets demonstrate an improvement of 2-3{\%} ROC-AUC compared to training student models directly, outperforming teacher and SOTA knowledge distillation benchmarks. Simulated online A/B tests using KD-Boost for automated Query Reformulation (QR) indicate a 6.31{\%} increase in query-to-query matching, 2.76{\%} increase in product coverage, and a 2.19{\%} improvement in relevance.",
}
| Real-time semantic matching is vital to web and product search. Transformer-based models have shown to be highly effective at encoding queries into an embedding space where semantically similar entities (queries or results) are in close proximity. However, the computational complexity of large transformer models limits their utilization for real-time matching. In this paper, we propose KD-Boost, a novel knowledge distillation algorithm designed for real-time semantic matching. KD-Boost trains low latency accurate student models by leveraging soft labels from a teacher model as well as ground truth via pairwise query-product and query-query signal derived from direct audits, user behavior, and taxonomy-based data using custom loss functions. Experiments on internal and external e-commerce datasets demonstrate an improvement of 2-3{\%} ROC-AUC compared to training student models directly, outperforming teacher and SOTA knowledge distillation benchmarks. Simulated online A/B tests using KD-Boost for automated Query Reformulation (QR) indicate a 6.31{\%} increase in query-to-query matching, 2.76{\%} increase in product coverage, and a 2.19{\%} improvement in relevance. | [
"Agrawal, Sanjay",
"Sembium, Vivek",
"M S, Ankith"
] | KD-Boost: Boosting Real-Time Semantic Matching in E-commerce with Knowledge Distillation | emnlp-industry.14 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.15.bib | https://aclanthology.org/2023.emnlp-industry.15/ | @inproceedings{zhang-etal-2023-multi-teacher,
title = "Multi-teacher Distillation for Multilingual Spelling Correction",
author = "Zhang, Jingfen and
Guo, Xuan and
Bodapati, Sravan and
Potts, Christopher",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.15",
doi = "10.18653/v1/2023.emnlp-industry.15",
pages = "142--151",
abstract = "Accurate spelling correction is a critical step in modern search interfaces, especially in an era of mobile devices and speech-to-text interfaces. For services that are deployed around the world, this poses a significant challenge for multilingual NLP: spelling errors need to be caught and corrected in all languages, and even in queries that use multiple languages. In this paper, we tackle this challenge using multi-teacher distillation. On our approach, a monolingual teacher model is trained for each language/locale, and these individual models are distilled into a single multilingual student model intended to serve all languages/locales. In experiments using open-source data as well as customer data from a worldwide search service, we show that this leads to highly effective spelling correction models that can meet the tight latency requirements of deployed services.",
}
| Accurate spelling correction is a critical step in modern search interfaces, especially in an era of mobile devices and speech-to-text interfaces. For services that are deployed around the world, this poses a significant challenge for multilingual NLP: spelling errors need to be caught and corrected in all languages, and even in queries that use multiple languages. In this paper, we tackle this challenge using multi-teacher distillation. In our approach, a monolingual teacher model is trained for each language/locale, and these individual models are distilled into a single multilingual student model intended to serve all languages/locales. In experiments using open-source data as well as customer data from a worldwide search service, we show that this leads to highly effective spelling correction models that can meet the tight latency requirements of deployed services. | [
"Zhang, Jingfen",
"Guo, Xuan",
"Bodapati, Sravan",
"Potts, Christopher"
] | Multi-teacher Distillation for Multilingual Spelling Correction | emnlp-industry.15 | 2311.11518 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.16.bib | https://aclanthology.org/2023.emnlp-industry.16/ | @inproceedings{chen-etal-2023-named,
title = "Does Named Entity Recognition Truly Not Scale Up to Real-world Product Attribute Extraction?",
author = "Chen, Wei-Te and
Shinzato, Keiji and
Yoshinaga, Naoki and
Xia, Yandi",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.16",
doi = "10.18653/v1/2023.emnlp-industry.16",
pages = "152--159",
abstract = "The key challenge in the attribute-value extraction (AVE) task from e-commerce sites is the scalability to diverse attributes for a large number of products in real-world e-commerce sites. To make AVE scalable to diverse attributes, recent researchers adopted a question-answering (QA)-based approach that additionally inputs the target attribute as a query to extract its values, and confirmed its advantage over a classical approach based on named-entity recognition (NER) on real-word e-commerce datasets. In this study, we argue the scalability of the NER-based approach compared to the QA-based approach, since researchers have compared BERT-based QA-based models to only a weak BiLSTM-based NER baseline trained from scratch in terms of only accuracy on datasets designed to evaluate the QA-based approach. Experimental results using a publicly available real-word dataset revealed that, under a fair setting, BERT-based NER models rival BERT-based QA models in terms of the accuracy, and their inference is faster than the QA model that processes the same product text several times to handle multiple target attributes.",
}
| The key challenge in the attribute-value extraction (AVE) task from e-commerce sites is the scalability to diverse attributes for a large number of products in real-world e-commerce sites. To make AVE scalable to diverse attributes, recent researchers adopted a question-answering (QA)-based approach that additionally inputs the target attribute as a query to extract its values, and confirmed its advantage over a classical approach based on named-entity recognition (NER) on real-world e-commerce datasets. In this study, we argue for the scalability of the NER-based approach compared to the QA-based approach, since researchers have compared BERT-based QA-based models to only a weak BiLSTM-based NER baseline trained from scratch in terms of only accuracy on datasets designed to evaluate the QA-based approach. Experimental results using a publicly available real-world dataset revealed that, under a fair setting, BERT-based NER models rival BERT-based QA models in terms of accuracy, and their inference is faster than the QA model that processes the same product text several times to handle multiple target attributes. | [
"Chen, Wei-Te",
"Shinzato, Keiji",
"Yoshinaga, Naoki",
"Xia, Y",
"i"
] | Does Named Entity Recognition Truly Not Scale Up to Real-world Product Attribute Extraction? | emnlp-industry.16 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.17.bib | https://aclanthology.org/2023.emnlp-industry.17/ | @inproceedings{zhao-etal-2023-investigating,
title = "Investigating Table-to-Text Generation Capabilities of Large Language Models in Real-World Information Seeking Scenarios",
author = "Zhao, Yilun and
Zhang, Haowei and
Si, Shengyun and
Nan, Linyong and
Tang, Xiangru and
Cohan, Arman",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.17",
doi = "10.18653/v1/2023.emnlp-industry.17",
pages = "160--175",
abstract = "Tabular data is prevalent across various industries, necessitating significant time and effort for users to understand and manipulate for their information-seeking purposes. The advancements in large language models (LLMs) have shown enormous potential to improve user efficiency. However, the adoption of LLMs in real-world applications for table information seeking remains underexplored. In this paper, we investigate the table-to-text capabilities of different LLMs using four datasets within two real-world information seeking scenarios. These include the LogicNLG and our newly-constructed LoTNLG datasets for data insight generation, along with the FeTaQA and our newly-constructed F2WTQ datasets for query-based generation. We structure our investigation around three research questions, evaluating the performance of LLMs in table-to-text generation, automated evaluation, and feedback generation, respectively. Experimental results indicate that the current high-performing LLM, specifically GPT-4, can effectively serve as a table-to-text generator, evaluator, and feedback generator, facilitating users{'} information seeking purposes in real-world scenarios. However, a significant performance gap still exists between other open-sourced LLMs (e.g., Vicuna and LLaMA-2) and GPT-4 models. Our data and code are publicly available at https://github.com/yale-nlp/LLM-T2T.",
}
| Tabular data is prevalent across various industries, necessitating significant time and effort for users to understand and manipulate for their information-seeking purposes. The advancements in large language models (LLMs) have shown enormous potential to improve user efficiency. However, the adoption of LLMs in real-world applications for table information seeking remains underexplored. In this paper, we investigate the table-to-text capabilities of different LLMs using four datasets within two real-world information seeking scenarios. These include the LogicNLG and our newly-constructed LoTNLG datasets for data insight generation, along with the FeTaQA and our newly-constructed F2WTQ datasets for query-based generation. We structure our investigation around three research questions, evaluating the performance of LLMs in table-to-text generation, automated evaluation, and feedback generation, respectively. Experimental results indicate that the current high-performing LLM, specifically GPT-4, can effectively serve as a table-to-text generator, evaluator, and feedback generator, facilitating users{'} information seeking purposes in real-world scenarios. However, a significant performance gap still exists between other open-sourced LLMs (e.g., Vicuna and LLaMA-2) and GPT-4 models. Our data and code are publicly available at https://github.com/yale-nlp/LLM-T2T. | [
"Zhao, Yilun",
"Zhang, Haowei",
"Si, Shengyun",
"Nan, Linyong",
"Tang, Xiangru",
"Cohan, Arman"
] | Investigating Table-to-Text Generation Capabilities of Large Language Models in Real-World Information Seeking Scenarios | emnlp-industry.17 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.18.bib | https://aclanthology.org/2023.emnlp-industry.18/ | @inproceedings{hu-etal-2023-tmid,
title = "{TMID}: A Comprehensive Real-world Dataset for Trademark Infringement Detection in {E}-Commerce",
author = "Hu, Tongxin and
Li, Zhuang and
Jin, Xin and
Qu, Lizhen and
Zhang, Xin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.18",
doi = "10.18653/v1/2023.emnlp-industry.18",
pages = "176--184",
abstract = "Annually, e-commerce platforms incur substantial financial losses due to trademark infringements, making it crucial to identify and mitigate potential legal risks tied to merchant information registered to the platforms. However, the absence of high-quality datasets hampers research in this area. To address this gap, our study introduces TMID, a novel dataset to detect trademark infringement in merchant registrations. This is a real-world dataset sourced directly from Alipay, one of the world{'}s largest e-commerce and digital payment platforms. As infringement detection is a legal reasoning task requiring an understanding of the contexts and legal rules, we offer a thorough collection of legal rules and merchant and trademark-related contextual information with annotations from legal experts. We ensure the data quality by performing an extensive statistical analysis. Furthermore, we conduct an empirical study on this dataset to highlight its value and the key challenges. Through this study, we aim to contribute valuable resources to advance research into legal compliance related to trademark infringement within the e-commerce sphere.",
}
| Annually, e-commerce platforms incur substantial financial losses due to trademark infringements, making it crucial to identify and mitigate potential legal risks tied to merchant information registered to the platforms. However, the absence of high-quality datasets hampers research in this area. To address this gap, our study introduces TMID, a novel dataset to detect trademark infringement in merchant registrations. This is a real-world dataset sourced directly from Alipay, one of the world{'}s largest e-commerce and digital payment platforms. As infringement detection is a legal reasoning task requiring an understanding of the contexts and legal rules, we offer a thorough collection of legal rules and merchant and trademark-related contextual information with annotations from legal experts. We ensure the data quality by performing an extensive statistical analysis. Furthermore, we conduct an empirical study on this dataset to highlight its value and the key challenges. Through this study, we aim to contribute valuable resources to advance research into legal compliance related to trademark infringement within the e-commerce sphere. | [
"Hu, Tongxin",
"Li, Zhuang",
"Jin, Xin",
"Qu, Lizhen",
"Zhang, Xin"
] | TMID: A Comprehensive Real-world Dataset for Trademark Infringement Detection in E-Commerce | emnlp-industry.18 | 2312.05103 | [
"https://github.com/emnlptmid/emnlptmid.github.io"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.19.bib | https://aclanthology.org/2023.emnlp-industry.19/ | @inproceedings{liu-etal-2023-joint,
title = "Joint Dialogue Topic Segmentation and Categorization: A Case Study on Clinical Spoken Conversations",
author = "Liu, Zhengyuan and
Md Salleh, Siti Umairah and
Oh, Hong Choon and
Krishnaswamy, Pavitra and
Chen, Nancy",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.19",
doi = "10.18653/v1/2023.emnlp-industry.19",
pages = "185--193",
abstract = "Utilizing natural language processing techniques in clinical conversations is effective to improve the efficiency of health management workflows for medical staff and patients. Dialogue segmentation and topic categorization are two fundamental steps for processing verbose spoken conversations and highlighting informative spans for downstream tasks. However, in practical use cases, due to the variety of segmentation granularity and topic definition, and the lack of diverse annotated corpora, no generic models are readily applicable for domain-specific applications. In this work, we introduce and adopt a joint model for dialogue segmentation and topic categorization, and conduct a case study on healthcare follow-up calls for diabetes management; we provide insights from both data and model perspectives toward performance and robustness.",
}
| Utilizing natural language processing techniques in clinical conversations is effective to improve the efficiency of health management workflows for medical staff and patients. Dialogue segmentation and topic categorization are two fundamental steps for processing verbose spoken conversations and highlighting informative spans for downstream tasks. However, in practical use cases, due to the variety of segmentation granularity and topic definition, and the lack of diverse annotated corpora, no generic models are readily applicable for domain-specific applications. In this work, we introduce and adopt a joint model for dialogue segmentation and topic categorization, and conduct a case study on healthcare follow-up calls for diabetes management; we provide insights from both data and model perspectives toward performance and robustness. | [
"Liu, Zhengyuan",
"Md Salleh, Siti Umairah",
"Oh, Hong Choon",
"Krishnaswamy, Pavitra",
"Chen, Nancy"
] | Joint Dialogue Topic Segmentation and Categorization: A Case Study on Clinical Spoken Conversations | emnlp-industry.19 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.20.bib | https://aclanthology.org/2023.emnlp-industry.20/ | @inproceedings{wang-etal-2023-adapterdistillation,
title = "{A}dapter{D}istillation: Non-Destructive Task Composition with Knowledge Distillation",
author = "Wang, Junjie and
Chen, Yicheng and
Zhang, Wangshu and
Hu, Sen and
Xu, Teng and
Zheng, Jing",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.20",
doi = "10.18653/v1/2023.emnlp-industry.20",
pages = "194--201",
abstract = "Leveraging knowledge from multiple tasks through introducing a small number of task specific parameters into each transformer layer, also known as adapters, receives much attention recently. However, adding an extra fusion layer to implement knowledge composition not only increases the inference time but also is non-scalable for some applications. To avoid these issues, we propose a two-stage knowledge distillation algorithm called AdapterDistillation. In the first stage, we extract task specific knowledge by using local data to train a student adapter. In the second stage, we distill the knowledge from the existing teacher adapters into the student adapter to help its inference. Extensive experiments on frequently asked question retrieval in task-oriented dialog systems validate the efficiency of AdapterDistillation. We show that AdapterDistillation outperforms existing algorithms in terms of accuracy, resource consumption and inference time.",
}
| Leveraging knowledge from multiple tasks through introducing a small number of task specific parameters into each transformer layer, also known as adapters, has received much attention recently. However, adding an extra fusion layer to implement knowledge composition not only increases the inference time but also is non-scalable for some applications. To avoid these issues, we propose a two-stage knowledge distillation algorithm called AdapterDistillation. In the first stage, we extract task specific knowledge by using local data to train a student adapter. In the second stage, we distill the knowledge from the existing teacher adapters into the student adapter to help its inference. Extensive experiments on frequently asked question retrieval in task-oriented dialog systems validate the efficiency of AdapterDistillation. We show that AdapterDistillation outperforms existing algorithms in terms of accuracy, resource consumption and inference time. | [
"Wang, Junjie",
"Chen, Yicheng",
"Zhang, Wangshu",
"Hu, Sen",
"Xu, Teng",
"Zheng, Jing"
] | AdapterDistillation: Non-Destructive Task Composition with Knowledge Distillation | emnlp-industry.20 | 2312.16261 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.21.bib | https://aclanthology.org/2023.emnlp-industry.21/ | @inproceedings{wang-etal-2023-prominet,
title = "{PROMINET}: Prototype-based Multi-View Network for Interpretable Email Response Prediction",
author = "Wang, Yuqing and
Vijayaraghavan, Prashanth and
Degan, Ehsan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.21",
doi = "10.18653/v1/2023.emnlp-industry.21",
pages = "202--215",
abstract = "Email is a widely used tool for business communication, and email marketing has emerged as a cost-effective strategy for enterprises. While previous studies have examined factors affecting email marketing performance, limited research has focused on understanding email response behavior by considering email content and metadata. This study proposes a Prototype-based Multi-view Network (PROMINET) that incorporates semantic and structural information from email data. By utilizing prototype learning, the PROMINET model generates latent exemplars, enabling interpretable email response prediction. The model maps learned semantic and structural exemplars to observed samples in the training data at different levels of granularity, such as document, sentence, or phrase. The approach is evaluated on two real-world email datasets: the Enron corpus and an in-house Email Marketing corpus. Experimental results demonstrate that the PROMINET model outperforms baseline models, achieving a {\textasciitilde}3{\%} improvement in F1 score on both datasets. Additionally, the model provides interpretability through prototypes at different granularity levels while maintaining comparable performance to non-interpretable models. The learned prototypes also show potential for generating suggestions to enhance email text editing and improve the likelihood of effective email responses. This research contributes to enhancing sender-receiver communication and customer engagement in email interactions.",
}
| Email is a widely used tool for business communication, and email marketing has emerged as a cost-effective strategy for enterprises. While previous studies have examined factors affecting email marketing performance, limited research has focused on understanding email response behavior by considering email content and metadata. This study proposes a Prototype-based Multi-view Network (PROMINET) that incorporates semantic and structural information from email data. By utilizing prototype learning, the PROMINET model generates latent exemplars, enabling interpretable email response prediction. The model maps learned semantic and structural exemplars to observed samples in the training data at different levels of granularity, such as document, sentence, or phrase. The approach is evaluated on two real-world email datasets: the Enron corpus and an in-house Email Marketing corpus. Experimental results demonstrate that the PROMINET model outperforms baseline models, achieving a {\textasciitilde}3{\%} improvement in F1 score on both datasets. Additionally, the model provides interpretability through prototypes at different granularity levels while maintaining comparable performance to non-interpretable models. The learned prototypes also show potential for generating suggestions to enhance email text editing and improve the likelihood of effective email responses. This research contributes to enhancing sender-receiver communication and customer engagement in email interactions. | [
"Wang, Yuqing",
"Vijayaraghavan, Prashanth",
"Degan, Ehsan"
] | PROMINET: Prototype-based Multi-View Network for Interpretable Email Response Prediction | emnlp-industry.21 | 2310.16753 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.22.bib | https://aclanthology.org/2023.emnlp-industry.22/ | @inproceedings{chiu-2023-retrieval,
title = "Retrieval-Enhanced Dual Encoder Training for Product Matching",
author = "Chiu, Justin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.22",
doi = "10.18653/v1/2023.emnlp-industry.22",
pages = "216--222",
abstract = "Product matching is the task of matching a seller-listed item to an appropriate product. It is a critical task for an e-commerce platform, and the approach needs to be efficient to run in a large-scale setting. A dual encoder approach has been a common practice for product matching recently, due to its high performance and computation efficiency. In this paper, we propose a two-stage training for the dual encoder model. Stage 1 trained a dual encoder to identify the more informative training data. Stage 2 then train on the more informative data to get a better dual encoder model. This technique is a learned approach for building training data. We evaluate the retrieval-enhanced training on two different datasets: a publicly available Large-Scale Product Matching dataset and a real-world e-commerce dataset containing 47 million products. Experiment results show that our approach improved by 2{\%} F1 on the public dataset and 9{\%} F1 on the real-world e-commerce dataset.",
}
| Product matching is the task of matching a seller-listed item to an appropriate product. It is a critical task for an e-commerce platform, and the approach needs to be efficient to run in a large-scale setting. A dual encoder approach has been a common practice for product matching recently, due to its high performance and computation efficiency. In this paper, we propose a two-stage training for the dual encoder model. Stage 1 trains a dual encoder to identify the more informative training data. Stage 2 then trains on the more informative data to get a better dual encoder model. This technique is a learned approach for building training data. We evaluate the retrieval-enhanced training on two different datasets: a publicly available Large-Scale Product Matching dataset and a real-world e-commerce dataset containing 47 million products. Experimental results show that our approach improved by 2{\%} F1 on the public dataset and 9{\%} F1 on the real-world e-commerce dataset. | [
"Chiu, Justin"
] | Retrieval-Enhanced Dual Encoder Training for Product Matching | emnlp-industry.22 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.23.bib | https://aclanthology.org/2023.emnlp-industry.23/ | @inproceedings{he-etal-2023-wordart,
title = "{W}ord{A}rt Designer: User-Driven Artistic Typography Synthesis using Large Language Models",
author = "He, Jun-Yan and
Cheng, Zhi-Qi and
Li, Chenyang and
Sun, Jingdong and
Xiang, Wangmeng and
Lin, Xianhui and
Kang, Xiaoyang and
Jin, Zengke and
Hu, Yusen and
Luo, Bin and
Geng, Yifeng and
Xie, Xuansong",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.23",
doi = "10.18653/v1/2023.emnlp-industry.23",
pages = "223--232",
abstract = "This paper introduces WordArt Designer, a user-driven framework for artistic typography synthesis, relying on the Large Language Model (LLM). The system incorporates four key modules: the LLM Engine, SemTypo, StyTypo, and TexTypo modules. 1) The LLM Engine, empowered by the LLM (e.g. GPT-3.5), interprets user inputs and generates actionable prompts for the other modules, thereby transforming abstract concepts into tangible designs. 2) The SemTypo module optimizes font designs using semantic concepts, striking a balance between artistic transformation and readability. 3) Building on the semantic layout provided by the SemTypo module, the StyTypo module creates smooth, refined images. 4) The TexTypo module further enhances the design{'}s aesthetics through texture rendering, enabling the generation of inventive textured fonts. Notably, WordArt Designer highlights the fusion of generative AI with artistic typography. Experience its capabilities on ModelScope: https://www.modelscope.cn/studios/WordArt/WordArt.",
}
| This paper introduces WordArt Designer, a user-driven framework for artistic typography synthesis, relying on the Large Language Model (LLM). The system incorporates four key modules: the LLM Engine, SemTypo, StyTypo, and TexTypo modules. 1) The LLM Engine, empowered by the LLM (e.g. GPT-3.5), interprets user inputs and generates actionable prompts for the other modules, thereby transforming abstract concepts into tangible designs. 2) The SemTypo module optimizes font designs using semantic concepts, striking a balance between artistic transformation and readability. 3) Building on the semantic layout provided by the SemTypo module, the StyTypo module creates smooth, refined images. 4) The TexTypo module further enhances the design{'}s aesthetics through texture rendering, enabling the generation of inventive textured fonts. Notably, WordArt Designer highlights the fusion of generative AI with artistic typography. Experience its capabilities on ModelScope: https://www.modelscope.cn/studios/WordArt/WordArt. | [
"He, Jun-Yan",
"Cheng, Zhi-Qi",
"Li, Chenyang",
"Sun, Jingdong",
"Xiang, Wangmeng",
"Lin, Xianhui",
"Kang, Xiaoyang",
"Jin, Zengke",
"Hu, Yusen",
"Luo, Bin",
"Geng, Yifeng",
"Xie, Xuansong"
] | WordArt Designer: User-Driven Artistic Typography Synthesis using Large Language Models | emnlp-industry.23 | 2310.18332 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.24.bib | https://aclanthology.org/2023.emnlp-industry.24/ | @inproceedings{kaji-2023-lattice,
title = "Lattice Path Edit Distance: A {R}omanization-aware Edit Distance for Extracting Misspelling-Correction Pairs from {J}apanese Search Query Logs",
author = "Kaji, Nobuhiro",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.24",
doi = "10.18653/v1/2023.emnlp-industry.24",
pages = "233--242",
abstract = "Edit distance has been successfully used to extract training data, i.e., misspelling-correction pairs, of spelling correction models from search query logs in languages including English. However, the success does not readily apply to Japanese, where misspellings are often dissimilar to correct spellings due to the romanization-based input methods. To address this problem, we introduce lattice path edit distance, which utilizes romanization lattices to efficiently consider all possible romanized forms of input strings. Empirical experiments using Japanese search query logs demonstrated that the lattice path edit distance outperformed baseline methods including the standard edit distance combined with an existing transliterator and morphological analyzer. A training data collection pipeline that uses the lattice path edit distance has been deployed in production at our search engine for over a year.",
}
| Edit distance has been successfully used to extract training data, i.e., misspelling-correction pairs, of spelling correction models from search query logs in languages including English. However, the success does not readily apply to Japanese, where misspellings are often dissimilar to correct spellings due to the romanization-based input methods. To address this problem, we introduce lattice path edit distance, which utilizes romanization lattices to efficiently consider all possible romanized forms of input strings. Empirical experiments using Japanese search query logs demonstrated that the lattice path edit distance outperformed baseline methods including the standard edit distance combined with an existing transliterator and morphological analyzer. A training data collection pipeline that uses the lattice path edit distance has been deployed in production at our search engine for over a year. | [
"Kaji, Nobuhiro"
] | Lattice Path Edit Distance: A Romanization-aware Edit Distance for Extracting Misspelling-Correction Pairs from Japanese Search Query Logs | emnlp-industry.24 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.25.bib | https://aclanthology.org/2023.emnlp-industry.25/ | @inproceedings{gao-etal-2023-learning-multilingual,
title = "Learning Multilingual Sentence Representations with Cross-lingual Consistency Regularization",
author = "Gao, Pengzhi and
Zhang, Liwen and
He, Zhongjun and
Wu, Hua and
Wang, Haifeng",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.25",
doi = "10.18653/v1/2023.emnlp-industry.25",
pages = "243--262",
abstract = "Multilingual sentence representations are the foundation for similarity-based bitext mining, which is crucial for scaling multilingual neural machine translation (NMT) system to more languages. In this paper, we introduce MuSR: a one-for-all Multilingual Sentence Representation model that supports 223 languages. Leveraging billions of English-centric parallel corpora, we train a multilingual Transformer encoder, coupled with an auxiliary Transformer decoder, by adopting a multilingual NMT framework with CrossConST, a cross-lingual consistency regularization technique proposed in Gao et al. (2023). Experimental results on multilingual similarity search and bitext mining tasks show the effectiveness of our approach. Specifically, MuSR achieves superior performance over LASER3 (Heffernan et al., 2022) which consists of 148 independent multilingual sentence encoders.",
}
| Multilingual sentence representations are the foundation for similarity-based bitext mining, which is crucial for scaling multilingual neural machine translation (NMT) system to more languages. In this paper, we introduce MuSR: a one-for-all Multilingual Sentence Representation model that supports 223 languages. Leveraging billions of English-centric parallel corpora, we train a multilingual Transformer encoder, coupled with an auxiliary Transformer decoder, by adopting a multilingual NMT framework with CrossConST, a cross-lingual consistency regularization technique proposed in Gao et al. (2023). Experimental results on multilingual similarity search and bitext mining tasks show the effectiveness of our approach. Specifically, MuSR achieves superior performance over LASER3 (Heffernan et al., 2022) which consists of 148 independent multilingual sentence encoders. | [
"Gao, Pengzhi",
"Zhang, Liwen",
"He, Zhongjun",
"Wu, Hua",
"Wang, Haifeng"
] | Learning Multilingual Sentence Representations with Cross-lingual Consistency Regularization | emnlp-industry.25 | 2306.06919 | [
"https://github.com/gpengzhi/crossconst-sr"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.26.bib | https://aclanthology.org/2023.emnlp-industry.26/ | @inproceedings{van-dorpe-etal-2023-unveiling,
title = "Unveiling Identity Biases in Toxicity Detection : A Game-Focused Dataset and Reactivity Analysis Approach",
author = "Van Dorpe, Josiane and
Yang, Zachary and
Grenon-Godbout, Nicolas and
Winterstein, Gr{\'e}goire",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.26",
doi = "10.18653/v1/2023.emnlp-industry.26",
pages = "263--274",
abstract = "Identity biases arise commonly from annotated datasets, can be propagated in language models and can cause further harm to marginal groups. Existing bias benchmarking datasets are mainly focused on gender or racial biases and are made to pinpoint which class the model is biased towards. They also are not designed for the gaming industry, a concern for models built for toxicity detection in videogames{'} chat. We propose a dataset and a method to highlight oversensitive terms using reactivity analysis and the model{'}s performance. We test our dataset against ToxBuster, a language model developed by Ubisoft fine-tuned for toxicity detection on multiplayer videogame{'}s written chat, and Perspective API. We find that these toxicity models often automatically tag terms related to a community{'}s identity as toxic, which prevents members of already marginalized groups to make their presence known or have a mature / normal conversation. Through this process, we have generated an interesting list of terms that trigger the models to varying degrees, along with insights on establishing a baseline through human annotations.",
}
| Identity biases arise commonly from annotated datasets, can be propagated in language models and can cause further harm to marginal groups. Existing bias benchmarking datasets are mainly focused on gender or racial biases and are made to pinpoint which class the model is biased towards. They also are not designed for the gaming industry, a concern for models built for toxicity detection in videogames{'} chat. We propose a dataset and a method to highlight oversensitive terms using reactivity analysis and the model{'}s performance. We test our dataset against ToxBuster, a language model developed by Ubisoft and fine-tuned for toxicity detection on multiplayer videogame{'}s written chat, and Perspective API. We find that these toxicity models often automatically tag terms related to a community{'}s identity as toxic, which prevents members of already marginalized groups from making their presence known or having a mature / normal conversation. Through this process, we have generated an interesting list of terms that trigger the models to varying degrees, along with insights on establishing a baseline through human annotations. | [
"Van Dorpe, Josiane",
"Yang, Zachary",
"Grenon-Godbout, Nicolas",
"Winterstein, Gr{\\'e}goire"
] | Unveiling Identity Biases in Toxicity Detection : A Game-Focused Dataset and Reactivity Analysis Approach | emnlp-industry.26 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.27.bib | https://aclanthology.org/2023.emnlp-industry.27/ | @inproceedings{lin-etal-2023-orange,
title = "{ORANGE}: Text-video Retrieval via Watch-time-aware Heterogeneous Graph Contrastive Learning",
author = "Lin, Yucheng and
Chang, Tim and
Chang, Yaning and
Ma, Jianqiang and
Li, Donghui and
Peng, Ting and
Li, Zang and
Zhou, Zhiyi and
Wang, Feng",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.27",
doi = "10.18653/v1/2023.emnlp-industry.27",
pages = "275--283",
abstract = "With the explosive growth of short-video data on industrial video-sharing platforms such as TikTok and YouTube, text-video retrieval techniques have become increasingly important. Most existing works for text-video retrieval focus on designing informative representation learning methods and delicate matching mechanisms, which leverage the content information of queries and videos themselves (i.e., textual information of queries and multimodal information of videos). However, real-world scenarios often involve brief, ambiguous queries and low-quality videos, making content-based retrieval less effective. In order to accommodate various search requirements and enhance user satisfaction, this study introduces a novel Text-video Retrieval method via Watch-time-aware Heterogeneous Graph Contrastive Learning (termed ORANGE). This approach aims to learn informative embeddings for queries and videos by leveraging both content information and the abundant relational information present in video-search scenarios. Specifically, we first construct a heterogeneous information graph where nodes represent domain objects (e.g., query, video, tag) and edges represent rich relations among these objects. Afterwards, a meta-path-guided heterogeneous graph attention encoder with the awareness of video watch time is devised to encode various semantic aspects of query and video nodes. To train our model, we introduce a meta-path-wise contrastive learning paradigm that facilitates capturing dependencies across multiple semantic relations, thereby enhancing the obtained embeddings. Finally, when deployed online, for new queries non-existent in the constructed graph, a bert-based query encoder distilled from our ORANGE is employed. Offline experiments conducted on a real-world dataset demonstrate the effectiveness of our ORANGE. Moreover, it has been implemented in the matching stage of an industrial online video-search service, where it exhibited statistically significant improvements over the online baseline in an A/B test.",
}
| With the explosive growth of short-video data on industrial video-sharing platforms such as TikTok and YouTube, text-video retrieval techniques have become increasingly important. Most existing works for text-video retrieval focus on designing informative representation learning methods and delicate matching mechanisms, which leverage the content information of queries and videos themselves (i.e., textual information of queries and multimodal information of videos). However, real-world scenarios often involve brief, ambiguous queries and low-quality videos, making content-based retrieval less effective. In order to accommodate various search requirements and enhance user satisfaction, this study introduces a novel Text-video Retrieval method via Watch-time-aware Heterogeneous Graph Contrastive Learning (termed ORANGE). This approach aims to learn informative embeddings for queries and videos by leveraging both content information and the abundant relational information present in video-search scenarios. Specifically, we first construct a heterogeneous information graph where nodes represent domain objects (e.g., query, video, tag) and edges represent rich relations among these objects. Afterwards, a meta-path-guided heterogeneous graph attention encoder with the awareness of video watch time is devised to encode various semantic aspects of query and video nodes. To train our model, we introduce a meta-path-wise contrastive learning paradigm that facilitates capturing dependencies across multiple semantic relations, thereby enhancing the obtained embeddings. Finally, when deployed online, for new queries non-existent in the constructed graph, a bert-based query encoder distilled from our ORANGE is employed. Offline experiments conducted on a real-world dataset demonstrate the effectiveness of our ORANGE. Moreover, it has been implemented in the matching stage of an industrial online video-search service, where it exhibited statistically significant improvements over the online baseline in an A/B test. | [
"Lin, Yucheng",
"Chang, Tim",
"Chang, Yaning",
"Ma, Jianqiang",
"Li, Donghui",
"Peng, Ting",
"Li, Zang",
"Zhou, Zhiyi",
"Wang, Feng"
] | ORANGE: Text-video Retrieval via Watch-time-aware Heterogeneous Graph Contrastive Learning | emnlp-industry.27 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.28.bib | https://aclanthology.org/2023.emnlp-industry.28/ | @inproceedings{hidey-sarthak-2023-compute,
title = "Compute-Efficient Churn Reduction for Conversational Agents",
author = "Hidey, Christopher and
Sarthak, Sarthak",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.28",
doi = "10.18653/v1/2023.emnlp-industry.28",
pages = "284--293",
abstract = "Model churn occurs when re-training a model yields different predictions despite using the same data and hyper-parameters. Churn reduction is crucial for industry conversational systems where users expect consistent results for the same queries. In this setting, compute resources are often limited due to latency requirements during serving and overall time constraints during re-training. To address this issue, we propose a compute-efficient method that mitigates churn without requiring extra resources for training or inference. Our approach involves a lightweight data pre-processing step that pairs semantic parses based on their {``}function call signature{''} and encourages similarity through an additional loss based on Jensen-Shannon Divergence. We validate the effectiveness of our method in three scenarios: academic (+3.93 percent improvement on average in a churn reduction metric), simulated noisy data (+8.09), and industry (+5.28) settings.",
}
| Model churn occurs when re-training a model yields different predictions despite using the same data and hyper-parameters. Churn reduction is crucial for industry conversational systems where users expect consistent results for the same queries. In this setting, compute resources are often limited due to latency requirements during serving and overall time constraints during re-training. To address this issue, we propose a compute-efficient method that mitigates churn without requiring extra resources for training or inference. Our approach involves a lightweight data pre-processing step that pairs semantic parses based on their {``}function call signature{''} and encourages similarity through an additional loss based on Jensen-Shannon Divergence. We validate the effectiveness of our method in three scenarios: academic (+3.93 percent improvement on average in a churn reduction metric), simulated noisy data (+8.09), and industry (+5.28) settings. | [
"Hidey, Christopher",
"Sarthak, Sarthak"
] | Compute-Efficient Churn Reduction for Conversational Agents | emnlp-industry.28 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.29.bib | https://aclanthology.org/2023.emnlp-industry.29/ | @inproceedings{yang-etal-2023-empower,
title = "Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering",
author = "Yang, Fangkai and
Zhao, Pu and
Wang, Zezhong and
Wang, Lu and
Qiao, Bo and
Zhang, Jue and
Garg, Mohit and
Lin, Qingwei and
Rajmohan, Saravan and
Zhang, Dongmei",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.29",
doi = "10.18653/v1/2023.emnlp-industry.29",
pages = "294--312",
abstract = "Large Language Model (LLM) has gained popularity and achieved remarkable results in open-domain tasks, but its performance in real industrial domain-specific scenarios is average due to its lack of specific domain knowledge. This issue has attracted widespread attention, but there are few relevant benchmarks available. In this paper, we provide a benchmark Question Answering (QA) dataset named MSQA, centered around Microsoft products and IT technical problems encountered by customers. This dataset contains industry cloud-specific QA knowledge, an area not extensively covered in general LLMs, making it well-suited for evaluating methods aiming to enhance LLMs{'} domain-specific capabilities. In addition, we propose a new model interaction paradigm that can empower LLM to achieve better performance on domain-specific tasks where it is not proficient. Extensive experiments demonstrate that the approach following our method outperforms the commonly used LLM with retrieval methods. We make our source code and sample data available at: https://aka.ms/Microsoft{\_}QA.",
}
| Large Language Model (LLM) has gained popularity and achieved remarkable results in open-domain tasks, but its performance in real industrial domain-specific scenarios is average due to its lack of specific domain knowledge. This issue has attracted widespread attention, but there are few relevant benchmarks available. In this paper, we provide a benchmark Question Answering (QA) dataset named MSQA, centered around Microsoft products and IT technical problems encountered by customers. This dataset contains industry cloud-specific QA knowledge, an area not extensively covered in general LLMs, making it well-suited for evaluating methods aiming to enhance LLMs{'} domain-specific capabilities. In addition, we propose a new model interaction paradigm that can empower LLM to achieve better performance on domain-specific tasks where it is not proficient. Extensive experiments demonstrate that the approach following our method outperforms the commonly used LLM with retrieval methods. We make our source code and sample data available at: https://aka.ms/Microsoft{\_}QA. | [
"Yang, Fangkai",
"Zhao, Pu",
"Wang, Zezhong",
"Wang, Lu",
"Qiao, Bo",
"Zhang, Jue",
"Garg, Mohit",
"Lin, Qingwei",
"Rajmohan, Saravan",
"Zhang, Dongmei"
] | Empower Large Language Model to Perform Better on Industrial Domain-Specific Question Answering | emnlp-industry.29 | 2305.11541 | [
"https://github.com/keanudicap/MSQA"
] | https://huggingface.co/papers/2305.11541 | 2 | 1 | 1 | 8 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.30.bib | https://aclanthology.org/2023.emnlp-industry.30/ | @inproceedings{li-etal-2023-enhancing-extreme,
title = "Enhancing Extreme Multi-Label Text Classification: Addressing Challenges in Model, Data, and Evaluation",
author = "Li, Dan and
Zhu, Zi Long and
van de Loo, Janneke and
Masip Gomez, Agnes and
Yadav, Vikrant and
Tsatsaronis, Georgios and
Afzal, Zubair",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.30",
doi = "10.18653/v1/2023.emnlp-industry.30",
pages = "313--321",
abstract = "Extreme multi-label text classification is a prevalent task in industry, but it frequently encounters challenges in terms of machine learning perspectives, including model limitations, data scarcity, and time-consuming evaluation. This paper aims to mitigate these issues by introducing novel approaches. Firstly, we propose a label ranking model as an alternative to the conventional SciBERT-based classification model, enabling efficient handling of large-scale labels and accommodating new labels. Secondly, we present an active learning-based pipeline that addresses the data scarcity of new labels during the update of a classification system. Finally, we introduce ChatGPT to assist with model evaluation. Our experiments demonstrate the effectiveness of these techniques in enhancing the extreme multi-label text classification task.",
}
| Extreme multi-label text classification is a prevalent task in industry, but it frequently encounters challenges in terms of machine learning perspectives, including model limitations, data scarcity, and time-consuming evaluation. This paper aims to mitigate these issues by introducing novel approaches. Firstly, we propose a label ranking model as an alternative to the conventional SciBERT-based classification model, enabling efficient handling of large-scale labels and accommodating new labels. Secondly, we present an active learning-based pipeline that addresses the data scarcity of new labels during the update of a classification system. Finally, we introduce ChatGPT to assist with model evaluation. Our experiments demonstrate the effectiveness of these techniques in enhancing the extreme multi-label text classification task. | [
"Li, Dan",
"Zhu, Zi Long",
"van de Loo, Janneke",
"Masip Gomez, Agnes",
"Yadav, Vikrant",
"Tsatsaronis, Georgios",
"Afzal, Zubair"
] | Enhancing Extreme Multi-Label Text Classification: Addressing Challenges in Model, Data, and Evaluation | emnlp-industry.30 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.31.bib | https://aclanthology.org/2023.emnlp-industry.31/ | @inproceedings{ye-etal-2023-query,
title = "Query-aware Multi-modal based Ranking Relevance in Video Search",
author = "Ye, Chengcan and
Peng, Ting and
Chang, Tim and
Zhou, Zhiyi and
Wang, Feng",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.31",
doi = "10.18653/v1/2023.emnlp-industry.31",
pages = "322--330",
abstract = "Relevance ranking system plays a crucial role in video search on streaming platforms. Most relevance ranking methods focus on text modality, incapable of fully exploiting cross-modal cues present in video. Recent multi-modal models have demonstrated promise in various vision-language tasks but provide limited help for downstream query-video relevance tasks due to the discrepency between relevance ranking-agnostic pre-training objectives and the real video search scenarios that demand comprehensive relevance modeling. To address these challenges, we propose a QUery-Aware pre-training model with multi-modaLITY (QUALITY) that incorporates hard-mined query information as alignment targets and utilizes video tag information for guidance. QUALITY is integrated into our relevance ranking model, which leverages multi-modal knowledge and improves ranking optimization method based on ordinal regression. Extensive experiments show our proposed model significantly enhances video search performance.",
}
| Relevance ranking system plays a crucial role in video search on streaming platforms. Most relevance ranking methods focus on text modality, incapable of fully exploiting cross-modal cues present in video. Recent multi-modal models have demonstrated promise in various vision-language tasks but provide limited help for downstream query-video relevance tasks due to the discrepancy between relevance ranking-agnostic pre-training objectives and the real video search scenarios that demand comprehensive relevance modeling. To address these challenges, we propose a QUery-Aware pre-training model with multi-modaLITY (QUALITY) that incorporates hard-mined query information as alignment targets and utilizes video tag information for guidance. QUALITY is integrated into our relevance ranking model, which leverages multi-modal knowledge and improves the ranking optimization method based on ordinal regression. Extensive experiments show our proposed model significantly enhances video search performance. | [
"Ye, Chengcan",
"Peng, Ting",
"Chang, Tim",
"Zhou, Zhiyi",
"Wang, Feng"
] | Query-aware Multi-modal based Ranking Relevance in Video Search | emnlp-industry.31 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.32.bib | https://aclanthology.org/2023.emnlp-industry.32/ | @inproceedings{good-etal-2023-coordinated,
title = "Coordinated Replay Sample Selection for Continual Federated Learning",
author = "Good, Jack and
Majmudar, Jimit and
Dupuy, Christophe and
Wang, Jixuan and
Peris, Charith and
Chung, Clement and
Zemel, Richard and
Gupta, Rahul",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.32",
doi = "10.18653/v1/2023.emnlp-industry.32",
pages = "331--342",
abstract = "Continual Federated Learning (CFL) combines Federated Learning (FL), the decentralized learning of a central model on a number of client devices that may not communicate their data, and Continual Learning (CL), the learning of a model from a continual stream of data without keeping the entire history. In CL, the main challenge is forgetting what was learned from past data. While replay-based algorithms that keep a small pool of past training data are effective to reduce forgetting, only simple replay sample selection strategies have been applied to CFL in prior work, and no previous work has explored coordination among clients for better sample selection. To bridge this gap, we adapt a replay sample selection objective based on loss gradient diversity to CFL and propose a new relaxation-based selection of samples to optimize the objective. Next, we propose a practical algorithm to coordinate gradient-based replay sample selection across clients without communicating private data. We benchmark our coordinated and uncoordinated replay sample selection algorithms against random sampling-based baselines with language models trained on a large scale de-identified real-world text dataset. We show that gradient-based sample selection methods both boost performance and reduce forgetting compared to random sampling methods, with our coordination method showing gains early in the low replay size regime (when the budget for storing past data is small).",
}
| Continual Federated Learning (CFL) combines Federated Learning (FL), the decentralized learning of a central model on a number of client devices that may not communicate their data, and Continual Learning (CL), the learning of a model from a continual stream of data without keeping the entire history. In CL, the main challenge is forgetting what was learned from past data. While replay-based algorithms that keep a small pool of past training data are effective to reduce forgetting, only simple replay sample selection strategies have been applied to CFL in prior work, and no previous work has explored coordination among clients for better sample selection. To bridge this gap, we adapt a replay sample selection objective based on loss gradient diversity to CFL and propose a new relaxation-based selection of samples to optimize the objective. Next, we propose a practical algorithm to coordinate gradient-based replay sample selection across clients without communicating private data. We benchmark our coordinated and uncoordinated replay sample selection algorithms against random sampling-based baselines with language models trained on a large scale de-identified real-world text dataset. We show that gradient-based sample selection methods both boost performance and reduce forgetting compared to random sampling methods, with our coordination method showing gains early in the low replay size regime (when the budget for storing past data is small). | [
"Good, Jack",
"Majmudar, Jimit",
"Dupuy, Christophe",
"Wang, Jixuan",
"Peris, Charith",
"Chung, Clement",
"Zemel, Richard",
"Gupta, Rahul"
] | Coordinated Replay Sample Selection for Continual Federated Learning | emnlp-industry.32 | 2310.15054 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.33.bib | https://aclanthology.org/2023.emnlp-industry.33/ | @inproceedings{laskar-etal-2023-building,
title = "Building Real-World Meeting Summarization Systems using Large Language Models: A Practical Perspective",
author = "Laskar, Md Tahmid Rahman and
Fu, Xue-Yong and
Chen, Cheng and
Bhushan TN, Shashi",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.33",
doi = "10.18653/v1/2023.emnlp-industry.33",
pages = "343--352",
abstract = "This paper studies how to effectively build meeting summarization systems for real-world usage using large language models (LLMs). For this purpose, we conduct an extensive evaluation and comparison of various closed-source and open-source LLMs, namely, GPT-4, GPT-3.5, PaLM-2, and LLaMA-2. Our findings reveal that most closed-source LLMs are generally better in terms of performance. However, much smaller open-source models like LLaMA-2 (7B and 13B) could still achieve performance comparable to the large closed-source models even in zero-shot scenarios. Considering the privacy concerns of closed-source models for only being accessible via API, alongside the high cost associated with using fine-tuned versions of the closed-source models, the opensource models that can achieve competitive performance are more advantageous for industrial use. Balancing performance with associated costs and privacy concerns, the LLaMA-2-7B model looks more promising for industrial usage. In sum, this paper offers practical insights on using LLMs for real-world business meeting summarization, shedding light on the trade-offs between performance and cost.",
}
| This paper studies how to effectively build meeting summarization systems for real-world usage using large language models (LLMs). For this purpose, we conduct an extensive evaluation and comparison of various closed-source and open-source LLMs, namely, GPT-4, GPT-3.5, PaLM-2, and LLaMA-2. Our findings reveal that most closed-source LLMs are generally better in terms of performance. However, much smaller open-source models like LLaMA-2 (7B and 13B) could still achieve performance comparable to the large closed-source models even in zero-shot scenarios. Considering the privacy concerns with closed-source models, which are only accessible via API, alongside the high cost associated with using fine-tuned versions of the closed-source models, the open-source models that can achieve competitive performance are more advantageous for industrial use. Balancing performance with associated costs and privacy concerns, the LLaMA-2-7B model looks more promising for industrial usage. In sum, this paper offers practical insights on using LLMs for real-world business meeting summarization, shedding light on the trade-offs between performance and cost. | [
"Laskar, Md Tahmid Rahman",
"Fu, Xue-Yong",
"Chen, Cheng",
"Bhushan TN, Shashi"
] | Building Real-World Meeting Summarization Systems using Large Language Models: A Practical Perspective | emnlp-industry.33 | 2310.19233 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.34.bib | https://aclanthology.org/2023.emnlp-industry.34/ | @inproceedings{amba-hombaiah-etal-2023-creator,
title = "Creator Context for Tweet Recommendation",
author = "Amba Hombaiah, Spurthi and
Chen, Tao and
Zhang, Mingyang and
Bendersky, Michael and
Najork, Marc and
Colen, Matt and
Levi, Sergey and
Ofitserov, Vladimir and
Amin, Tanvir",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.34",
doi = "10.18653/v1/2023.emnlp-industry.34",
pages = "353--363",
abstract = "When discussing a tweet, people usually not only refer to the content it delivers, but also to the person behind the tweet. In other words, grounding the interpretation of the tweet in the context of its creator plays an important role in deciphering the true intent and the importance of the tweet. In this paper, we attempt to answer the question of how creator context should be used to advance tweet understanding. Specifically, we investigate the usefulness of different types of creator context, and examine different model structures for incorporating creator context in tweet modeling. We evaluate our tweet understanding models on a practical use case {--} recommending relevant tweets to news articles. This use case already exists in popular news apps, and can also serve as a useful assistive tool for journalists. We discover that creator context is essential for tweet understanding, and can improve application metrics by a large margin. However, we also observe that not all creator contexts are equal. Creator context can be time sensitive and noisy. Careful creator context selection and deliberate model structure design play an important role in creator context effectiveness.",
}
| When discussing a tweet, people usually not only refer to the content it delivers, but also to the person behind the tweet. In other words, grounding the interpretation of the tweet in the context of its creator plays an important role in deciphering the true intent and the importance of the tweet. In this paper, we attempt to answer the question of how creator context should be used to advance tweet understanding. Specifically, we investigate the usefulness of different types of creator context, and examine different model structures for incorporating creator context in tweet modeling. We evaluate our tweet understanding models on a practical use case {--} recommending relevant tweets to news articles. This use case already exists in popular news apps, and can also serve as a useful assistive tool for journalists. We discover that creator context is essential for tweet understanding, and can improve application metrics by a large margin. However, we also observe that not all creator contexts are equal. Creator context can be time sensitive and noisy. Careful creator context selection and deliberate model structure design play an important role in creator context effectiveness. | [
"Amba Hombaiah, Spurthi",
"Chen, Tao",
"Zhang, Mingyang",
"Bendersky, Michael",
"Najork, Marc",
"Colen, Matt",
"Levi, Sergey",
"Ofitserov, Vladimir",
"Amin, Tanvir"
] | Creator Context for Tweet Recommendation | emnlp-industry.34 | 2311.17650 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.35.bib | https://aclanthology.org/2023.emnlp-industry.35/ | @inproceedings{vuong-etal-2023-adabert,
title = "{A}da{BERT}-{CTC}: Leveraging {BERT}-{CTC} for Text-Only Domain Adaptation in {ASR}",
author = "Vuong, Tyler and
Mundnich, Karel and
Bekal, Dhanush and
Elluru, Veera and
Ronanki, Srikanth and
Bodapati, Sravan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.35",
doi = "10.18653/v1/2023.emnlp-industry.35",
pages = "364--371",
abstract = "End-to-end (E2E) automatic speech recognition (ASR) models are becoming increasingly popular in commercial applications, such as virtual assistants, closed captioning, and dictation systems. The accuracy of the ASR is crucial to their success. However, E2E models still struggle to recognize out-of-domain words such as proper nouns and domain-specific terms. In this paper we introduce AdaBERT-CTC, a domain adaptation technique that relies solely on textual data. Our method allows for text-only adaptation by fine-tuning a pre-trained self-supervised text encoder model. Additionally, we show that our method can be made parameter-efficient by adding bottleneck adapters to the pre-trained model. This allows for adaptation with less than a 5{\%} increase in parameters and minimal computational overhead during inference. We demonstrate that our approach outperforms the base BERT-CTC model by up to 14{\%} relative word error rate improvement on several out-of-domain, publicly available datasets.",
}
| End-to-end (E2E) automatic speech recognition (ASR) models are becoming increasingly popular in commercial applications, such as virtual assistants, closed captioning, and dictation systems. The accuracy of the ASR is crucial to their success. However, E2E models still struggle to recognize out-of-domain words such as proper nouns and domain-specific terms. In this paper we introduce AdaBERT-CTC, a domain adaptation technique that relies solely on textual data. Our method allows for text-only adaptation by fine-tuning a pre-trained self-supervised text encoder model. Additionally, we show that our method can be made parameter-efficient by adding bottleneck adapters to the pre-trained model. This allows for adaptation with less than a 5{\%} increase in parameters and minimal computational overhead during inference. We demonstrate that our approach outperforms the base BERT-CTC model by up to 14{\%} relative word error rate improvement on several out-of-domain, publicly available datasets. | [
"Vuong, Tyler",
"Mundnich, Karel",
"Bekal, Dhanush",
"Elluru, Veera",
"Ronanki, Srikanth",
"Bodapati, Sravan"
] | AdaBERT-CTC: Leveraging BERT-CTC for Text-Only Domain Adaptation in ASR | emnlp-industry.35 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.36.bib | https://aclanthology.org/2023.emnlp-industry.36/ | @inproceedings{kochedykov-etal-2023-conversing,
title = "Conversing with databases: Practical Natural Language Querying",
author = "Kochedykov, Denis and
Yin, Fenglin and
Khatravath, Sreevidya",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.36",
doi = "10.18653/v1/2023.emnlp-industry.36",
pages = "372--379",
abstract = "In this work, we designed, developed and released in production DataQue {--} a hybrid NLQ (Natural Language Querying) system for conversational DB querying. We address multiple practical problems that are not accounted for in public Text-to-SQL solutions {--} numerous complex implied conditions in user questions, jargon and abbreviations, custom calculations, non-SQL operations, a need to inject all those into pipeline fast and to have guaranteed parsing results for demanding users, cold-start problem. The DataQue processing pipeline for Text-to-SQL translation consists of 10-15 model-based and rule-based components that allows to tightly control the processing.",
}
| In this work, we designed, developed and released in production DataQue {--} a hybrid NLQ (Natural Language Querying) system for conversational DB querying. We address multiple practical problems that are not accounted for in public Text-to-SQL solutions {--} numerous complex implied conditions in user questions, jargon and abbreviations, custom calculations, non-SQL operations, a need to inject all those into the pipeline fast and to have guaranteed parsing results for demanding users, and the cold-start problem. The DataQue processing pipeline for Text-to-SQL translation consists of 10-15 model-based and rule-based components that allow us to tightly control the processing. | [
"Kochedykov, Denis",
"Yin, Fenglin",
"Khatravath, Sreevidya"
] | Conversing with databases: Practical Natural Language Querying | emnlp-industry.36 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.37.bib | https://aclanthology.org/2023.emnlp-industry.37/ | @inproceedings{radharapu-etal-2023-aart,
title = "{AART}: {AI}-Assisted Red-Teaming with Diverse Data Generation for New {LLM}-powered Applications",
author = "Radharapu, Bhaktipriya and
Robinson, Kevin and
Aroyo, Lora and
Lahoti, Preethi",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.37",
doi = "10.18653/v1/2023.emnlp-industry.37",
pages = "380--395",
abstract = "Adversarially testing large language models (LLMs) is crucial for their safe and responsible deployment in practice. We introduce an AI-assisted approach for automated generation of adversarial evaluation datasets to test the safety of LLM generations on new downstream applications. We call it AART AI-assisted Red-Teaming - an automated alternative to current manual red-teaming efforts. AART offers a data generation and augmentation pipeline of reusable and customizable recipes that reduce significantly human effort and enable integration of adversarial testing earlier in new product development. AART generates evaluation datasets with high diversity of content characteristics critical for effective adversarial testing (e.g. sensitive and harmful concepts, specific to a wide range of cultural and geographic regions and application scenarios). The data generation is steered by AI-assisted recipes to define, scope and prioritize diversity within a new application context. This feeds into a structured LLM-generation process that scales up evaluation priorities. This provides transparency of developers evaluation intentions and enables quick adaptation to new use cases and newly discovered model weaknesses. Compared to some of the state-of-the-art tools AART shows promising results in terms of concept coverage and data quality.",
}
| Adversarially testing large language models (LLMs) is crucial for their safe and responsible deployment in practice. We introduce an AI-assisted approach for automated generation of adversarial evaluation datasets to test the safety of LLM generations on new downstream applications. We call it AART AI-assisted Red-Teaming - an automated alternative to current manual red-teaming efforts. AART offers a data generation and augmentation pipeline of reusable and customizable recipes that significantly reduce human effort and enable integration of adversarial testing earlier in new product development. AART generates evaluation datasets with high diversity of content characteristics critical for effective adversarial testing (e.g. sensitive and harmful concepts, specific to a wide range of cultural and geographic regions and application scenarios). The data generation is steered by AI-assisted recipes to define, scope and prioritize diversity within a new application context. This feeds into a structured LLM-generation process that scales up evaluation priorities. This provides transparency of developers{'} evaluation intentions and enables quick adaptation to new use cases and newly discovered model weaknesses. Compared to some of the state-of-the-art tools, AART shows promising results in terms of concept coverage and data quality. | [
"Radharapu, Bhaktipriya",
"Robinson, Kevin",
"Aroyo, Lora",
"Lahoti, Preethi"
] | AART: AI-Assisted Red-Teaming with Diverse Data Generation for New LLM-powered Applications | emnlp-industry.37 | 2311.08592 | [
"https://github.com/google-research-datasets/aart-ai-safety-dataset"
] | https://huggingface.co/papers/2311.08592 | 0 | 0 | 0 | 4 | [] | [
"dynamoai/safe_eval"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.38.bib | https://aclanthology.org/2023.emnlp-industry.38/ | @inproceedings{kumar-etal-2023-speakerly,
title = "Speakerly: A Voice-based Writing Assistant for Text Composition",
author = "Kumar, Dhruv and
Raheja, Vipul and
Kaiser-Schatzlein, Alice and
Perry, Robyn and
Joshi, Apurva and
Hugues-Nuger, Justin and
Lou, Samuel and
Chowdhury, Navid",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.38",
doi = "10.18653/v1/2023.emnlp-industry.38",
pages = "396--407",
abstract = "We present Speakerly, a new real-time voice-based writing assistance system that helps users with text composition across various use cases such as emails, instant messages, and notes. The user can interact with the system through instructions or dictation, and the system generates a well-formatted and coherent document. We describe the system architecture and detail how we address the various challenges while building and deploying such a system at scale. More specifically, our system uses a combination of small, task-specific models as well as pre-trained language models for fast and effective text composition while supporting a variety of input modes for better usability.",
}
| We present Speakerly, a new real-time voice-based writing assistance system that helps users with text composition across various use cases such as emails, instant messages, and notes. The user can interact with the system through instructions or dictation, and the system generates a well-formatted and coherent document. We describe the system architecture and detail how we address the various challenges while building and deploying such a system at scale. More specifically, our system uses a combination of small, task-specific models as well as pre-trained language models for fast and effective text composition while supporting a variety of input modes for better usability. | [
"Kumar, Dhruv",
"Raheja, Vipul",
"Kaiser-Schatzlein, Alice",
"Perry, Robyn",
"Joshi, Apurva",
"Hugues-Nuger, Justin",
"Lou, Samuel",
"Chowdhury, Navid"
] | Speakerly: A Voice-based Writing Assistant for Text Composition | emnlp-industry.38 | 2310.16251 | [
""
] | https://huggingface.co/papers/2310.16251 | 0 | 0 | 0 | 8 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.39.bib | https://aclanthology.org/2023.emnlp-industry.39/ | @inproceedings{li-etal-2023-chatgpt,
title = "Are {C}hat{GPT} and {GPT}-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks",
author = "Li, Xianzhi and
Chan, Samuel and
Zhu, Xiaodan and
Pei, Yulong and
Ma, Zhiqiang and
Liu, Xiaomo and
Shah, Sameena",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.39",
doi = "10.18653/v1/2023.emnlp-industry.39",
pages = "408--422",
abstract = "The most recent large language models (LLMs) such as ChatGPT and GPT-4 have shown exceptional capabilities of generalist models, achieving state-of-the-art performance on a wide range of NLP tasks with little or no adaptation. How effective are such models in the finance domain? Understanding this basic question would have a significant impact on many downstream financial analytical tasks. In this paper, we conduct empirical studies and provide experimental evidences of their performance on a wide variety of financial text analytical problems, using eight benchmark datasets from five categories of tasks. We report both the strengths and limitations of the current models by comparing them to the state-of-the-art fine-tuned approaches and the recently released domain-specific pretrained models. We hope our study can help to understand the capability of the existing models in the financial domain and facilitate further improvements.",
}
| The most recent large language models (LLMs) such as ChatGPT and GPT-4 have shown exceptional capabilities of generalist models, achieving state-of-the-art performance on a wide range of NLP tasks with little or no adaptation. How effective are such models in the finance domain? Understanding this basic question would have a significant impact on many downstream financial analytical tasks. In this paper, we conduct empirical studies and provide experimental evidence of their performance on a wide variety of financial text analytical problems, using eight benchmark datasets from five categories of tasks. We report both the strengths and limitations of the current models by comparing them to the state-of-the-art fine-tuned approaches and the recently released domain-specific pretrained models. We hope our study can help to understand the capability of the existing models in the financial domain and facilitate further improvements. | [
"Li, Xianzhi",
"Chan, Samuel",
"Zhu, Xiaodan",
"Pei, Yulong",
"Ma, Zhiqiang",
"Liu, Xiaomo",
"Shah, Sameena"
] | Are ChatGPT and GPT-4 General-Purpose Solvers for Financial Text Analytics? A Study on Several Typical Tasks | emnlp-industry.39 | 2305.05862 | [
""
] | https://huggingface.co/papers/2305.05862 | 3 | 4 | 1 | 5 | [] | [
"Aiera/finqa-verified"
] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.40.bib | https://aclanthology.org/2023.emnlp-industry.40/ | @inproceedings{sun-etal-2023-cl,
title = "{CL}-{QR}: Cross-Lingual Enhanced Query Reformulation for Multi-lingual Conversational {AI} Agents",
author = "Sun, Zhongkai and
Zhao, Zhengyang and
Lu, Sixing and
Ma, Chengyuan and
Liu, Xiaohu and
Fan, Xing and
Shen, Wei and
Guo, Chenlei",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.40",
doi = "10.18653/v1/2023.emnlp-industry.40",
pages = "423--431",
abstract = "The growing popularity of conversational AI agents such as Alexa, Google Assistant, and Siri rely on accurate spoken language comprehension. The query reformulation (QR) method, which reformulates defective user queries, has been broadly adopted to mitigate the challenges posed by understanding user{'}s intent from imperfect spoken recognition result. However, due to the scarcity of non-English QR labels, providing high-quality QR for non-English users still remains a challenge. This work proposes a novel cross-lingual QR framework, CL-QR, to leverage the abundant reformulation resources in English to improve non-English QR performance. The proposed work also proposes a Module-wise Mutually-supervised Feedback learning (MMF) algorithm to enable the continually self-improving of the CL-QR, which alleviates the lack of cross-lingual QR training data and enhances the delivery of high-quality reformulations learned in English for multilingual queries. Both offline evaluation and online A/B testing demonstrates the effectiveness of the proposed method.",
}
| The growing popularity of conversational AI agents such as Alexa, Google Assistant, and Siri relies on accurate spoken language comprehension. The query reformulation (QR) method, which reformulates defective user queries, has been broadly adopted to mitigate the challenges posed by understanding the user{'}s intent from imperfect spoken recognition results. However, due to the scarcity of non-English QR labels, providing high-quality QR for non-English users still remains a challenge. This work proposes a novel cross-lingual QR framework, CL-QR, to leverage the abundant reformulation resources in English to improve non-English QR performance. The proposed work also proposes a Module-wise Mutually-supervised Feedback learning (MMF) algorithm to enable the continual self-improvement of CL-QR, which alleviates the lack of cross-lingual QR training data and enhances the delivery of high-quality reformulations learned in English for multilingual queries. Both offline evaluation and online A/B testing demonstrate the effectiveness of the proposed method. | [
"Sun, Zhongkai",
"Zhao, Zhengyang",
"Lu, Sixing",
"Ma, Chengyuan",
"Liu, Xiaohu",
"Fan, Xing",
"Shen, Wei",
"Guo, Chenlei"
] | CL-QR: Cross-Lingual Enhanced Query Reformulation for Multi-lingual Conversational AI Agents | emnlp-industry.40 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.41.bib | https://aclanthology.org/2023.emnlp-industry.41/ | @inproceedings{sun-etal-2023-improving,
title = "Improving Contextual Query Rewrite for Conversational {AI} Agents through User-preference Feedback Learning",
author = "Sun, Zhongkai and
Zhou, Yingxue and
Hao, Jie and
Fan, Xing and
Lu, Yanbin and
Ma, Chengyuan and
Shen, Wei and
Guo, Chenlei",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.41",
doi = "10.18653/v1/2023.emnlp-industry.41",
pages = "432--439",
abstract = "Contextual query rewriting (CQR) is a crucial component in Conversational AI agents, leveraging the contextual information from previous user-agent conversations to improve the comprehension of current user intent. However, traditional CQR methods often concentrate on supervised fine-tuning only, neglecting the opportunities to learn from user feedback to align with user preferences. Inspired by recent advances in learning from human feedback (LHF), this paper proposes a novel Preference Aligned Contextual Query Rewriting (PA-CQR) framework to enhance the CQR model{'}s capability in generating user preference-aligned rewrites. This paper also investigates the efficacy of various state-of-the-art feedback learning algorithms on the CQR task, and proposes a novel Dynamic Direct Preference Optimization (Dynamic DPO) algorithm to better adapt the DPO algorithm to large-scale CQR training. Experiments on large-scale real-world CQR data set demonstrate the superiority of the proposed PA-CQR framework and the Dynamic DPO.",
}
| Contextual query rewriting (CQR) is a crucial component in Conversational AI agents, leveraging the contextual information from previous user-agent conversations to improve the comprehension of current user intent. However, traditional CQR methods often concentrate on supervised fine-tuning only, neglecting the opportunities to learn from user feedback to align with user preferences. Inspired by recent advances in learning from human feedback (LHF), this paper proposes a novel Preference Aligned Contextual Query Rewriting (PA-CQR) framework to enhance the CQR model{'}s capability in generating user preference-aligned rewrites. This paper also investigates the efficacy of various state-of-the-art feedback learning algorithms on the CQR task, and proposes a novel Dynamic Direct Preference Optimization (Dynamic DPO) algorithm to better adapt the DPO algorithm to large-scale CQR training. Experiments on large-scale real-world CQR data set demonstrate the superiority of the proposed PA-CQR framework and the Dynamic DPO. | [
"Sun, Zhongkai",
"Zhou, Yingxue",
"Hao, Jie",
"Fan, Xing",
"Lu, Yanbin",
"Ma, Chengyuan",
"Shen, Wei",
"Guo, Chenlei"
] | Improving Contextual Query Rewrite for Conversational AI Agents through User-preference Feedback Learning | emnlp-industry.41 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.42.bib | https://aclanthology.org/2023.emnlp-industry.42/ | @inproceedings{singhal-etal-2023-scaling,
title = "Scaling Neural {ITN} for Numbers and Temporal Expressions in {T}amil: Findings for an Agglutinative Low-resource Language",
author = "Singhal, Bhavuk and
Gopalan, Sindhuja and
Krishna, Amrith and
Chetlur, Malolan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.42",
doi = "10.18653/v1/2023.emnlp-industry.42",
pages = "440--450",
abstract = "ITN involves rewriting the verbalised form of text from spoken transcripts to its corresponding written form. The task inherently expects challenges in identifying ITN entries due to spelling variations in words arising out of dialects, transcription errors etc. Additionally, in Tamil, word boundaries between adjacent words in a sentence often get obscured due to Punarchi, i.e. phonetic transformation of these boundaries. Being morphologically rich, the words in Tamil show a high degree of agglutination due to inflection and clitics. The combination of such factors leads to a high degree of surface-form variations, making scalability with pure rule-based approaches difficult. Instead, we experiment with fine-tuning three pre-trained neural LMs, consisting of a seq2seq model (s2s), a non-autoregressive text editor (NAR) and a sequence tagger + rules combination (tagger). While the tagger approach works best in a fully-supervised setting, s2s performs the best (98.05 F-Score) when augmented with additional data, via bootstrapping and data augmentation (DA{\&}B). S2S reports a cumulative percentage improvement of 20.1 {\%}, and statistically significant gains for all our models with DA{\&}B. Compared to a fully supervised setup, bootstrapping alone reports a percentage improvement as high as 14.12 {\%}, even with a small seed set of 324 ITN entries.",
}
| ITN involves rewriting the verbalised form of text from spoken transcripts to its corresponding written form. The task inherently poses challenges in identifying ITN entries due to spelling variations in words arising out of dialects, transcription errors, etc. Additionally, in Tamil, word boundaries between adjacent words in a sentence often get obscured due to Punarchi, i.e. phonetic transformation of these boundaries. Being morphologically rich, the words in Tamil show a high degree of agglutination due to inflection and clitics. The combination of such factors leads to a high degree of surface-form variations, making scalability with pure rule-based approaches difficult. Instead, we experiment with fine-tuning three pre-trained neural LMs, consisting of a seq2seq model (s2s), a non-autoregressive text editor (NAR) and a sequence tagger + rules combination (tagger). While the tagger approach works best in a fully-supervised setting, s2s performs the best (98.05 F-Score) when augmented with additional data, via bootstrapping and data augmentation (DA{\&}B). S2S reports a cumulative percentage improvement of 20.1 {\%}, and statistically significant gains for all our models with DA{\&}B. Compared to a fully supervised setup, bootstrapping alone reports a percentage improvement as high as 14.12 {\%}, even with a small seed set of 324 ITN entries. | [
"Singhal, Bhavuk",
"Gopalan, Sindhuja",
"Krishna, Amrith",
"Chetlur, Malolan"
] | Scaling Neural ITN for Numbers and Temporal Expressions in Tamil: Findings for an Agglutinative Low-resource Language | emnlp-industry.42 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.43.bib | https://aclanthology.org/2023.emnlp-industry.43/ | @inproceedings{cohn-etal-2023-eelbert,
title = "{EELBERT}: Tiny Models through Dynamic Embeddings",
author = "Cohn, Gabrielle and
Agarwal, Rishika and
Gupta, Deepanshu and
Patwardhan, Siddharth",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.43",
doi = "10.18653/v1/2023.emnlp-industry.43",
pages = "451--459",
abstract = "We introduce EELBERT, an approach for compression of transformer-based models (e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is achieved by replacing the input embedding layer of the model with dynamic, i.e. on-the-fly, embedding computations. Since the input embedding layer occupies a large portion of the model size, especially for the smaller BERT variants, replacing this layer with an embedding computation function helps us reduce the model size significantly. Empirical evaluation on the GLUE benchmark shows that our BERT variants (EELBERT) suffer minimal regression compared to the traditional BERT models. Through this approach, we are able to develop our smallest model UNO-EELBERT, which achieves a GLUE score within 4{\%} of fully trained BERT-tiny, while being 15x smaller (1.2 MB) in size.",
}
| We introduce EELBERT, an approach for compression of transformer-based models (e.g., BERT), with minimal impact on the accuracy of downstream tasks. This is achieved by replacing the input embedding layer of the model with dynamic, i.e. on-the-fly, embedding computations. Since the input embedding layer occupies a large portion of the model size, especially for the smaller BERT variants, replacing this layer with an embedding computation function helps us reduce the model size significantly. Empirical evaluation on the GLUE benchmark shows that our BERT variants (EELBERT) suffer minimal regression compared to the traditional BERT models. Through this approach, we are able to develop our smallest model UNO-EELBERT, which achieves a GLUE score within 4{\%} of fully trained BERT-tiny, while being 15x smaller (1.2 MB) in size. | [
"Cohn, Gabrielle",
"Agarwal, Rishika",
"Gupta, Deepanshu",
"Patwardhan, Siddharth"
] | EELBERT: Tiny Models through Dynamic Embeddings | emnlp-industry.43 | 2310.20144 | [
""
] | https://huggingface.co/papers/2310.20144 | 0 | 3 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.44.bib | https://aclanthology.org/2023.emnlp-industry.44/ | @inproceedings{ali-etal-2023-gold,
title = "Gold Standard {B}angla {OCR} Dataset: An In-Depth Look at Data Preprocessing and Annotation Processes",
author = "Ali, Hasmot and
Rabby, AKM Shahariar Azad and
Islam, Md Majedul and
Mahamud, A.k.m and
Hasan, Nazmul and
Rahman, Fuad",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.44",
doi = "10.18653/v1/2023.emnlp-industry.44",
pages = "460--470",
abstract = "This research paper focuses on developing an improved Bangla Optical Character Recognition (OCR) system, addressing the challenges posed by the complexity of Bangla text structure, diverse handwriting styles, and the scarcity of comprehensive datasets. Leveraging recent advancements in Deep Learning and OCR techniques, we anticipate a significant enhancement in the performance of Bangla OCR by utilizing a large and diverse collection of labeled Bangla text image datasets. This study introduces the most extensive gold standard corpus for Bangla characters and words, comprising over 4 million human-annotated images. Our dataset encompasses various document types, such as Computer Compose, Letterpress, Typewriters, Outdoor Banner-Poster, and Handwritten documents, gathered from diverse sources. The entire corpus has undergone meticulous human annotation, employing a controlled annotation procedure consisting of three-step annotation and one-step validation, ensuring adherence to gold standard criteria. This paper provides a comprehensive overview of the complete data collection procedure. The ICT Division, Government of the People{'}s Republic of Bangladesh, will make the dataset publicly available, facilitating further research and development in Bangla OCR and related domains.",
}
| This research paper focuses on developing an improved Bangla Optical Character Recognition (OCR) system, addressing the challenges posed by the complexity of Bangla text structure, diverse handwriting styles, and the scarcity of comprehensive datasets. Leveraging recent advancements in Deep Learning and OCR techniques, we anticipate a significant enhancement in the performance of Bangla OCR by utilizing a large and diverse collection of labeled Bangla text image datasets. This study introduces the most extensive gold standard corpus for Bangla characters and words, comprising over 4 million human-annotated images. Our dataset encompasses various document types, such as Computer Compose, Letterpress, Typewriters, Outdoor Banner-Poster, and Handwritten documents, gathered from diverse sources. The entire corpus has undergone meticulous human annotation, employing a controlled annotation procedure consisting of three-step annotation and one-step validation, ensuring adherence to gold standard criteria. This paper provides a comprehensive overview of the complete data collection procedure. The ICT Division, Government of the People{'}s Republic of Bangladesh, will make the dataset publicly available, facilitating further research and development in Bangla OCR and related domains. | [
"Ali, Hasmot",
"Rabby, AKM Shahariar Azad",
"Islam, Md Majedul",
"Mahamud, A.k.m",
"Hasan, Nazmul",
"Rahman, Fuad"
] | Gold Standard Bangla OCR Dataset: An In-Depth Look at Data Preprocessing and Annotation Processes | emnlp-industry.44 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.45.bib | https://aclanthology.org/2023.emnlp-industry.45/ | @inproceedings{qi-etal-2023-pillow,
title = "{PILLOW}: Enhancing Efficient Instruction Fine-tuning via Prompt Matching",
author = "Qi, Zhenting and
Tan, Xiaoyu and
Shi, Shaojie and
Qu, Chao and
Xu, Yinghui and
Qi, Yuan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.45",
doi = "10.18653/v1/2023.emnlp-industry.45",
pages = "471--482",
abstract = "Instruction fine-tuning has conventionally been employed to adapt Large Language Models (LLMs) to a variety of diverse tasks. Nonetheless, this technique often necessitates substantial computational resources, making it impractical for deployment by individuals or small-scale entities. Recently, Low-Rank Adaptation (LoRA) has become a promising alternative, offering tuning capabilities with reduced resource overhead. However, attaining satisfactory performance through the fine-tuning of LoRA is a non-trivial challenge. In this paper, we propose PILLOW, which aims to improve LoRA{'}s performance by leveraging LLM{'}s in-context learning capability through prompt matching via reinforcement learning in resource-constrained environments. Specifically, PILLOW incorporates a matching network that selects prompts from a user-defined pool, concatenates the optimal prompts given the user instruction, and performs inference using the LoRA-fine-tuned LLMs. Compared with typical instruction fine-tuning methods, PILLOW exhibits commensurate performance on various evaluation metrics, utilizing only consumer-grade GPU resources and exhibiting a large increase in training efficiency.",
}
| Instruction fine-tuning has conventionally been employed to adapt Large Language Models (LLMs) to a variety of diverse tasks. Nonetheless, this technique often necessitates substantial computational resources, making it impractical for deployment by individuals or small-scale entities. Recently, Low-Rank Adaptation (LoRA) has become a promising alternative, offering tuning capabilities with reduced resource overhead. However, attaining satisfactory performance through the fine-tuning of LoRA is a non-trivial challenge. In this paper, we propose PILLOW, which aims to improve LoRA{'}s performance by leveraging LLM{'}s in-context learning capability through prompt matching via reinforcement learning in resource-constrained environments. Specifically, PILLOW incorporates a matching network that selects prompts from a user-defined pool, concatenates the optimal prompts given the user instruction, and performs inference using the LoRA-fine-tuned LLMs. Compared with typical instruction fine-tuning methods, PILLOW exhibits commensurate performance on various evaluation metrics, utilizing only consumer-grade GPU resources and exhibiting a large increase in training efficiency. | [
"Qi, Zhenting",
"Tan, Xiaoyu",
"Shi, Shaojie",
"Qu, Chao",
"Xu, Yinghui",
"Qi, Yuan"
] | PILLOW: Enhancing Efficient Instruction Fine-tuning via Prompt Matching | emnlp-industry.45 | 2312.05621 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.46.bib | https://aclanthology.org/2023.emnlp-industry.46/ | @inproceedings{eden-etal-2023-welcome,
title = "Welcome to the Real World: Efficient, Incremental and Scalable Key Point Analysis",
author = "Eden, Lilach and
Kantor, Yoav and
Orbach, Matan and
Katz, Yoav and
Slonim, Noam and
Bar-Haim, Roy",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.46",
doi = "10.18653/v1/2023.emnlp-industry.46",
pages = "483--491",
abstract = "Key Point Analysis (KPA) is an emerging summarization framework, which extracts the main points from a collection of opinions, and quantifies their prevalence. It has been successfully applied to diverse types of data, including arguments, user reviews and survey responses. Despite the growing academic interest in KPA, little attention has been given to the practical challenges of implementing a KPA system in production. This work presents a deployed KPA system, which regularly serves multiple teams in our organization. We discuss the main challenges we faced while building a real-world KPA system, as well as the architecture and algorithmic improvements we developed to address these challenges. Specifically, we focus on efficient matching of sentences to key points, incremental processing, scalability and resiliency. The value of our contributions is demonstrated in an extensive set of experiments, over five existing and novel datasets. Finally, we describe several use cases of the deployed system, which illustrate its practical value.",
}
| Key Point Analysis (KPA) is an emerging summarization framework, which extracts the main points from a collection of opinions, and quantifies their prevalence. It has been successfully applied to diverse types of data, including arguments, user reviews and survey responses. Despite the growing academic interest in KPA, little attention has been given to the practical challenges of implementing a KPA system in production. This work presents a deployed KPA system, which regularly serves multiple teams in our organization. We discuss the main challenges we faced while building a real-world KPA system, as well as the architecture and algorithmic improvements we developed to address these challenges. Specifically, we focus on efficient matching of sentences to key points, incremental processing, scalability and resiliency. The value of our contributions is demonstrated in an extensive set of experiments, over five existing and novel datasets. Finally, we describe several use cases of the deployed system, which illustrate its practical value. | [
"Eden, Lilach",
"Kantor, Yoav",
"Orbach, Matan",
"Katz, Yoav",
"Slonim, Noam",
"Bar-Haim, Roy"
] | Welcome to the Real World: Efficient, Incremental and Scalable Key Point Analysis | emnlp-industry.46 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.47.bib | https://aclanthology.org/2023.emnlp-industry.47/ | @inproceedings{saadany-orasan-2023-automatic,
title = "Automatic Linking of Judgements to {UK} {S}upreme {C}ourt Hearings",
author = "Saadany, Hadeel and
Orasan, Constantin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.47",
doi = "10.18653/v1/2023.emnlp-industry.47",
pages = "492--500",
abstract = "One the most important archived legal material in the UK is the Supreme Court published judgements and video recordings of court sittings for the decided cases. The impact of Supreme Court published material extends far beyond the parties involved in any given case as it provides landmark rulings on arguable points of law of the greatest public and constitutional importance. However, the recordings of a case are usually very long which makes it both time and effort consuming for legal professionals to study the critical arguments in the legal deliberations. In this research, we summarise the second part of a combined research-industrial project for building an automated tool designed specifically to link segments in the text judgement to semantically relevant timespans in the videos of the hearings. The tool is employed as a User-Interface (UI) platform that provides a better access to justice by bookmarking the timespans in the videos which contributed to the final judgement of the case. We explain how we employ AI generative technology to retrieve the relevant links and show that the customisation of the GPT text embeddings to our dataset achieves the best accuracy for our automatic linking system.",
}
| One of the most important archived legal materials in the UK is the Supreme Court published judgements and video recordings of court sittings for the decided cases. The impact of Supreme Court published material extends far beyond the parties involved in any given case as it provides landmark rulings on arguable points of law of the greatest public and constitutional importance. However, the recordings of a case are usually very long, which makes it both time- and effort-consuming for legal professionals to study the critical arguments in the legal deliberations. In this research, we summarise the second part of a combined research-industrial project for building an automated tool designed specifically to link segments in the text judgement to semantically relevant timespans in the videos of the hearings. The tool is employed as a User-Interface (UI) platform that provides better access to justice by bookmarking the timespans in the videos which contributed to the final judgement of the case. We explain how we employ AI generative technology to retrieve the relevant links and show that the customisation of the GPT text embeddings to our dataset achieves the best accuracy for our automatic linking system. | [
"Saadany, Hadeel",
"Orasan, Constantin"
] | Automatic Linking of Judgements to UK Supreme Court Hearings | emnlp-industry.47 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.48.bib | https://aclanthology.org/2023.emnlp-industry.48/ | @inproceedings{wang-etal-2023-automatic,
title = "Automatic Marketing Theme and Commodity Construction System for {E}-commerce",
author = "Wang, Zhiping and
Lin, Peng and
Zhang, Hainan and
Chen, Hongshen and
Li, Tianhao and
Ding, Zhuoye and
Xu, Sulong and
Hu, Jinghe",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.48",
doi = "10.18653/v1/2023.emnlp-industry.48",
pages = "501--508",
abstract = "When consumers{'} shopping needs are concentrated, they are more interested in the collection of commodities under the specific marketing theme. Therefore, mining marketing themes and their commodities collections can help customers save shopping costs and improve user clicks and purchases for recommendation system. However, the current system invites experts to write marketing themes and select the relevant commodities, which suffer from difficulty in mass production, poor timeliness and low online indicators. Therefore, we propose a automatic marketing theme and commodity construction system, which can not only generate popular marketing themes and select the relevant commodities automatically, but also improve the theme online effectiveness in the recommendation system. Specifically, we firstly utilize the pretrained language model to generate the marketing themes. And then, we utilize the theme-commodity consistency module to select the relevant commodities for the above generative theme. What{'}s more, we also build the indicator simulator to evaluate the effectiveness of the above generative theme. When the indicator is lower, the above selective commodities will be input into the theme-rewriter module to generate more efficient marketing themes. Finally, we utilize the human screening to control the system quality. Both the offline experiments and online A/B test demonstrate the superior performance of our proposed system compared with state-of-the-art methods.",
}
| When consumers{'} shopping needs are concentrated, they are more interested in the collection of commodities under the specific marketing theme. Therefore, mining marketing themes and their commodity collections can help customers save shopping costs and improve user clicks and purchases for the recommendation system. However, the current system invites experts to write marketing themes and select the relevant commodities, which suffers from difficulty in mass production, poor timeliness and low online indicators. Therefore, we propose an automatic marketing theme and commodity construction system, which can not only generate popular marketing themes and select the relevant commodities automatically, but also improve the theme online effectiveness in the recommendation system. Specifically, we first utilize the pretrained language model to generate the marketing themes. Then, we utilize the theme-commodity consistency module to select the relevant commodities for the above generative theme. What{'}s more, we also build the indicator simulator to evaluate the effectiveness of the above generative theme. When the indicator is lower, the above selected commodities will be input into the theme-rewriter module to generate more efficient marketing themes. Finally, we utilize human screening to control the system quality. Both the offline experiments and online A/B test demonstrate the superior performance of our proposed system compared with state-of-the-art methods. | [
"Wang, Zhiping",
"Lin, Peng",
"Zhang, Hainan",
"Chen, Hongshen",
"Li, Tianhao",
"Ding, Zhuoye",
"Xu, Sulong",
"Hu, Jinghe"
] | Automatic Marketing Theme and Commodity Construction System for E-commerce | emnlp-industry.48 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.49.bib | https://aclanthology.org/2023.emnlp-industry.49/ | @inproceedings{inoue-etal-2023-towards,
title = "Towards Safer Operations: An Expert-involved Dataset of High-Pressure Gas Incidents for Preventing Future Failures",
author = "Inoue, Shumpei and
Nguyen, Minh-Tien and
Mizokuchi, Hiroki and
Nguyen, Tuan-Anh and
Nguyen, Huu-Hiep and
Le, Dung",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.49",
doi = "10.18653/v1/2023.emnlp-industry.49",
pages = "509--521",
abstract = "This paper introduces a new IncidentAI dataset for safety prevention. Different from prior corpora that usually contain a single task, our dataset comprises three tasks: named entity recognition, cause-effect extraction, and information retrieval. The dataset is annotated by domain experts who have at least six years of practical experience as high-pressure gas conservation managers. We validate the contribution of the dataset in the scenario of safety prevention. Preliminary results on the three tasks show that NLP techniques are beneficial for analyzing incident reports to prevent future failures. The dataset facilitates future research in NLP and incident management communities. The access to the dataset is also provided (The IncidentAI dataset is available at: https://github.com/Cinnamon/incident-ai-dataset).",
}
| This paper introduces a new IncidentAI dataset for safety prevention. Different from prior corpora that usually contain a single task, our dataset comprises three tasks: named entity recognition, cause-effect extraction, and information retrieval. The dataset is annotated by domain experts who have at least six years of practical experience as high-pressure gas conservation managers. We validate the contribution of the dataset in the scenario of safety prevention. Preliminary results on the three tasks show that NLP techniques are beneficial for analyzing incident reports to prevent future failures. The dataset facilitates future research in NLP and incident management communities. The access to the dataset is also provided (The IncidentAI dataset is available at: https://github.com/Cinnamon/incident-ai-dataset). | [
"Inoue, Shumpei",
"Nguyen, Minh-Tien",
"Mizokuchi, Hiroki",
"Nguyen, Tuan-Anh",
"Nguyen, Huu-Hiep",
"Le, Dung"
] | Towards Safer Operations: An Expert-involved Dataset of High-Pressure Gas Incidents for Preventing Future Failures | emnlp-industry.49 | 2310.12074 | [
""
] | https://huggingface.co/papers/2310.12074 | 0 | 0 | 0 | 6 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.50.bib | https://aclanthology.org/2023.emnlp-industry.50/ | @inproceedings{yao-etal-2023-auxiliary,
title = "An Auxiliary Task Boosted Multi-task Learning Method for Service Account Retrieval with Limited Human Annotation",
author = "Yao, Yuanzhou and
Zhang, Zhao and
Yang, Kaijia and
Liang, Huasheng and
Yan, Qiang and
Xu, Yongjun",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.50",
doi = "10.18653/v1/2023.emnlp-industry.50",
pages = "522--531",
abstract = "Service accounts, including organizations{'} official accounts and mini-programs, provide various convenient services for users, and have become crucial components of a number of applications. Therefore, retrieving service accounts quickly and accurately is vital. However, this task suffers from the problem of limited human annotation, i.e., manually assessing account functionality and assigning ratings based on user experience is both labor-intensive and time-consuming. To this end, this paper proposes a novel approach, the Auxiliary task Boosted Multi-Task Learning method (AuxBoost-MTL). Specifically, the proposed method introduces multiple auxiliary tasks, which is able to utilized the log data from our application as supervision, and enhance the performance of the main task, service account retrieval. Furthermore, we introduce an Adaptive Hierarchical Fusion Module (AHF module) into our approach. This module is designed to adaptively perform hierarchical fusion of embeddings from auxiliary tasks into the main task, thereby enhancing the model efficacy. Experiments on two real-world industrial datasets demonstrate the effectiveness of our proposed approach.",
}
| Service accounts, including organizations{'} official accounts and mini-programs, provide various convenient services for users, and have become crucial components of a number of applications. Therefore, retrieving service accounts quickly and accurately is vital. However, this task suffers from the problem of limited human annotation, i.e., manually assessing account functionality and assigning ratings based on user experience is both labor-intensive and time-consuming. To this end, this paper proposes a novel approach, the Auxiliary task Boosted Multi-Task Learning method (AuxBoost-MTL). Specifically, the proposed method introduces multiple auxiliary tasks, which are able to utilize the log data from our application as supervision and enhance the performance of the main task, service account retrieval. Furthermore, we introduce an Adaptive Hierarchical Fusion Module (AHF module) into our approach. This module is designed to adaptively perform hierarchical fusion of embeddings from auxiliary tasks into the main task, thereby enhancing the model efficacy. Experiments on two real-world industrial datasets demonstrate the effectiveness of our proposed approach. | [
"Yao, Yuanzhou",
"Zhang, Zhao",
"Yang, Kaijia",
"Liang, Huasheng",
"Yan, Qiang",
"Xu, Yongjun"
] | An Auxiliary Task Boosted Multi-task Learning Method for Service Account Retrieval with Limited Human Annotation | emnlp-industry.50 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.51.bib | https://aclanthology.org/2023.emnlp-industry.51/ | @inproceedings{an-etal-2023-vkie,
title = "{VKIE}: The Application of Key Information Extraction on Video Text",
author = "An, Siyu and
Liu, Ye and
Peng, Haoyuan and
Yin, Di",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.51",
doi = "10.18653/v1/2023.emnlp-industry.51",
pages = "532--540",
abstract = "Extracting structured information from videos is critical for numerous downstream applications in the industry. In this paper, we define a significant task of extracting hierarchical key information from visual texts on videos. To fulfill this task, we decouple it into four subtasks and introduce two implementation solutions called PipVKIE and UniVKIE. PipVKIE sequentially completes the four subtasks in continuous stages, while UniVKIE is improved by unifying all the subtasks into one backbone. Both PipVKIE and UniVKIE leverage multimodal information from vision, text, and coordinates for feature representation. Extensive experiments on one well-defined dataset demonstrate that our solutions can achieve remarkable performance and efficient inference speed.",
}
| Extracting structured information from videos is critical for numerous downstream applications in the industry. In this paper, we define a significant task of extracting hierarchical key information from visual texts on videos. To fulfill this task, we decouple it into four subtasks and introduce two implementation solutions called PipVKIE and UniVKIE. PipVKIE sequentially completes the four subtasks in continuous stages, while UniVKIE is improved by unifying all the subtasks into one backbone. Both PipVKIE and UniVKIE leverage multimodal information from vision, text, and coordinates for feature representation. Extensive experiments on one well-defined dataset demonstrate that our solutions can achieve remarkable performance and efficient inference speed. | [
"An, Siyu",
"Liu, Ye",
"Peng, Haoyuan",
"Yin, Di"
] | VKIE: The Application of Key Information Extraction on Video Text | emnlp-industry.51 | 2310.11650 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.52.bib | https://aclanthology.org/2023.emnlp-industry.52/ | @inproceedings{nathan-etal-2023-investigating,
title = "Investigating the Role and Impact of Disfluency on Summarization",
author = "Nathan, Varun and
Kumar, Ayush and
Vepa, Jithendra",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.52",
doi = "10.18653/v1/2023.emnlp-industry.52",
pages = "541--551",
abstract = "Contact centers handle both chat and voice calls for the same domain. As part of their workflow, it is a standard practice to summarize the conversations once they conclude. A significant distinction between chat and voice communication lies in the presence of disfluencies in voice calls, such as repetitions, restarts, and replacements. These disfluencies are generally considered noise for downstream natural language understanding (NLU) tasks. While a separate summarization model for voice calls can be trained in addition to chat specific model for the same domain, it requires manual annotations for both the channels and adds complexity arising due to maintaining two models. Therefore, it{'}s crucial to investigate if a model trained on fluent data can handle disfluent data effectively. While previous research explored impact of disfluency on question-answering and intent detection, its influence on summarization is inadequately studied. Our experiments reveal up to 6.99-point degradation in Rouge-L score, along with reduced fluency, consistency, and relevance when a fluent-trained model handles disfluent data. Replacement disfluencies have the highest negative impact. To mitigate this, we examine Fused-Fine Tuning by training the model with a combination of fluent and disfluent data, resulting in improved performance on both public and real-life datasets. Our work highlights the significance of incorporating disfluency in training summarization models and its advantages in an industrial setting.",
}
| Contact centers handle both chat and voice calls for the same domain. As part of their workflow, it is a standard practice to summarize the conversations once they conclude. A significant distinction between chat and voice communication lies in the presence of disfluencies in voice calls, such as repetitions, restarts, and replacements. These disfluencies are generally considered noise for downstream natural language understanding (NLU) tasks. While a separate summarization model for voice calls can be trained in addition to chat specific model for the same domain, it requires manual annotations for both the channels and adds complexity arising due to maintaining two models. Therefore, it{'}s crucial to investigate if a model trained on fluent data can handle disfluent data effectively. While previous research explored impact of disfluency on question-answering and intent detection, its influence on summarization is inadequately studied. Our experiments reveal up to 6.99-point degradation in Rouge-L score, along with reduced fluency, consistency, and relevance when a fluent-trained model handles disfluent data. Replacement disfluencies have the highest negative impact. To mitigate this, we examine Fused-Fine Tuning by training the model with a combination of fluent and disfluent data, resulting in improved performance on both public and real-life datasets. Our work highlights the significance of incorporating disfluency in training summarization models and its advantages in an industrial setting. | [
"Nathan, Varun",
"Kumar, Ayush",
"Vepa, Jithendra"
] | Investigating the Role and Impact of Disfluency on Summarization | emnlp-industry.52 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.53.bib | https://aclanthology.org/2023.emnlp-industry.53/ | @inproceedings{mukku-etal-2023-insightnet,
title = "{I}nsight{N}et : Structured Insight Mining from Customer Feedback",
author = "Mukku, Sandeep Sricharan and
Soni, Manan and
Aggarwal, Chetan and
Rana, Jitenkumar and
Yenigalla, Promod and
Patange, Rashmi and
Mohan, Shyam",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.53",
doi = "10.18653/v1/2023.emnlp-industry.53",
pages = "552--566",
abstract = "We propose InsightNet, a novel approach for the automated extraction of structured insights from customer reviews. Our end-to-end machine learning framework is designed to overcome the limitations of current solutions, including the absence of structure for identified topics, non-standard aspect names, and lack of abundant training data. The proposed solution builds a semi-supervised multi-level taxonomy from raw reviews, a semantic similarity heuristic approach to generate labelled data and employs a multi-task insight extraction architecture by fine-tuning an LLM. InsightNet identifies granular actionable topics with customer sentiments and verbatim for each topic. Evaluations on real-world customer review data show that InsightNet performs better than existing solutions in terms of structure, hierarchy and completeness. We empirically demonstrate that InsightNet outperforms the current state-of-the-art methods in multi-label topic classification, achieving an F1 score of 0.85, which is an improvement of 11{\%} F1-score over the previous best results. Additionally, InsightNet generalises well for unseen aspects and suggests new topics to be added to the taxonomy.",
}
| We propose InsightNet, a novel approach for the automated extraction of structured insights from customer reviews. Our end-to-end machine learning framework is designed to overcome the limitations of current solutions, including the absence of structure for identified topics, non-standard aspect names, and lack of abundant training data. The proposed solution builds a semi-supervised multi-level taxonomy from raw reviews, a semantic similarity heuristic approach to generate labelled data and employs a multi-task insight extraction architecture by fine-tuning an LLM. InsightNet identifies granular actionable topics with customer sentiments and verbatim for each topic. Evaluations on real-world customer review data show that InsightNet performs better than existing solutions in terms of structure, hierarchy and completeness. We empirically demonstrate that InsightNet outperforms the current state-of-the-art methods in multi-label topic classification, achieving an F1 score of 0.85, which is an improvement of 11{\%} F1-score over the previous best results. Additionally, InsightNet generalises well for unseen aspects and suggests new topics to be added to the taxonomy. | [
"Mukku, S",
"eep Sricharan",
"Soni, Manan",
"Aggarwal, Chetan",
"Rana, Jitenkumar",
"Yenigalla, Promod",
"Patange, Rashmi",
"Mohan, Shyam"
] | InsightNet : Structured Insight Mining from Customer Feedback | emnlp-industry.53 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.54.bib | https://aclanthology.org/2023.emnlp-industry.54/ | @inproceedings{singla-etal-2023-e2e,
title = "{E}2{E} Spoken Entity Extraction for Virtual Agents",
author = "Singla, Karan and
Kim, Yeon-Jun and
Bangalore, Srinivas",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.54",
doi = "10.18653/v1/2023.emnlp-industry.54",
pages = "567--574",
abstract = "In human-computer conversations, extracting entities such as names, street addresses and email addresses from speech is a challenging task. In this paper, we study the impact of fine-tuning pre-trained speech encoders on extracting spoken entities in human-readable form directly from speech without the need for text transcription. We illustrate that such a direct approach optimizes the encoder to transcribe only the entity relevant portions of speech ignoring the superfluous portions such as carrier phrases, or spell name entities. In the context of dialog from an enterprise virtual agent, we demonstrate that the 1-step approach outperforms the typical 2-step approach which first generates lexical transcriptions followed by text-based entity extraction for identifying spoken entities.",
}
| In human-computer conversations, extracting entities such as names, street addresses and email addresses from speech is a challenging task. In this paper, we study the impact of fine-tuning pre-trained speech encoders on extracting spoken entities in human-readable form directly from speech without the need for text transcription. We illustrate that such a direct approach optimizes the encoder to transcribe only the entity relevant portions of speech ignoring the superfluous portions such as carrier phrases, or spell name entities. In the context of dialog from an enterprise virtual agent, we demonstrate that the 1-step approach outperforms the typical 2-step approach which first generates lexical transcriptions followed by text-based entity extraction for identifying spoken entities. | [
"Singla, Karan",
"Kim, Yeon-Jun",
"Bangalore, Srinivas"
] | E2E Spoken Entity Extraction for Virtual Agents | emnlp-industry.54 | 2302.10186 | [
""
] | https://huggingface.co/papers/2302.10186 | 1 | 1 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.55.bib | https://aclanthology.org/2023.emnlp-industry.55/ | @inproceedings{blume-etal-2023-generative,
title = "Generative Models for Product Attribute Extraction",
author = "Blume, Ansel and
Zalmout, Nasser and
Ji, Heng and
Li, Xian",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.55",
doi = "10.18653/v1/2023.emnlp-industry.55",
pages = "575--585",
abstract = "Product attribute extraction is an emerging field in information extraction and e-commerce, with applications including knowledge base construction, product recommendation, and enhancing customer experiences. In this work, we explore the use of generative models for product attribute extraction. We analyze their utility with hard and soft prompting methods, and demonstrate their ability to generate implicit attribute values, which state-of-the-art sequence tagging models are unable to extract. We perform a wide range of experiments on Amazon and MAVE product attribute datasets, and are the first to present results on multilingual attribute extraction. Our results show that generative models can outperform state- of-the-art tagging models for explicit product attribute extraction while having greater data efficiency, that they have the unique ability to perform implicit attribute extraction, and that in certain settings large language models can perform competitively with finetuned models with as little as two in-context examples.",
}
| Product attribute extraction is an emerging field in information extraction and e-commerce, with applications including knowledge base construction, product recommendation, and enhancing customer experiences. In this work, we explore the use of generative models for product attribute extraction. We analyze their utility with hard and soft prompting methods, and demonstrate their ability to generate implicit attribute values, which state-of-the-art sequence tagging models are unable to extract. We perform a wide range of experiments on Amazon and MAVE product attribute datasets, and are the first to present results on multilingual attribute extraction. Our results show that generative models can outperform state-of-the-art tagging models for explicit product attribute extraction while having greater data efficiency, that they have the unique ability to perform implicit attribute extraction, and that in certain settings large language models can perform competitively with finetuned models with as little as two in-context examples. | [
"Blume, Ansel",
"Zalmout, Nasser",
"Ji, Heng",
"Li, Xian"
] | Generative Models for Product Attribute Extraction | emnlp-industry.55 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.56.bib | https://aclanthology.org/2023.emnlp-industry.56/ | @inproceedings{rony-etal-2023-carexpert,
title = "{C}ar{E}xpert: Leveraging Large Language Models for In-Car Conversational Question Answering",
author = "Rony, Md Rashad Al Hasan and
Suess, Christian and
Bhat, Sinchana Ramakanth and
Sudhi, Viju and
Schneider, Julia and
Vogel, Maximilian and
Teucher, Roman and
Friedl, Ken and
Sahoo, Soumya",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.56",
doi = "10.18653/v1/2023.emnlp-industry.56",
pages = "586--604",
abstract = "Large language models (LLMs) have demonstrated remarkable performance by following natural language instructions without fine-tuning them on domain-specific tasks and data. However, leveraging LLMs for domain-specific question answering suffers from severe limitations. The generated answer tends to hallucinate due to the training data collection time (when using off-the-shelf), complex user utterance and wrong retrieval (in retrieval-augmented generation). Furthermore, due to the lack of awareness about the domain and expected output, such LLMs may generate unexpected and unsafe answers that are not tailored to the target domain. In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. Specifically, CarExpert employs LLMs to control the input, provide domain-specific documents to the extractive and generative answering components, and controls the output to ensure safe and domain-specific answers. A comprehensive empirical evaluation exhibits that CarExpert outperforms state-of-the-art LLMs in generating natural, safe and car-specific answers.",
}
| Large language models (LLMs) have demonstrated remarkable performance by following natural language instructions without fine-tuning them on domain-specific tasks and data. However, leveraging LLMs for domain-specific question answering suffers from severe limitations. The generated answer tends to hallucinate due to the training data collection time (when using off-the-shelf), complex user utterance and wrong retrieval (in retrieval-augmented generation). Furthermore, due to the lack of awareness about the domain and expected output, such LLMs may generate unexpected and unsafe answers that are not tailored to the target domain. In this paper, we propose CarExpert, an in-car retrieval-augmented conversational question-answering system leveraging LLMs for different tasks. Specifically, CarExpert employs LLMs to control the input, provide domain-specific documents to the extractive and generative answering components, and controls the output to ensure safe and domain-specific answers. A comprehensive empirical evaluation exhibits that CarExpert outperforms state-of-the-art LLMs in generating natural, safe and car-specific answers. | [
"Rony, Md Rashad Al Hasan",
"Suess, Christian",
"Bhat, Sinchana Ramakanth",
"Sudhi, Viju",
"Schneider, Julia",
"Vogel, Maximilian",
"Teucher, Roman",
"Friedl, Ken",
"Sahoo, Soumya"
] | CarExpert: Leveraging Large Language Models for In-Car Conversational Question Answering | emnlp-industry.56 | 2310.09536 | [
""
] | https://huggingface.co/papers/2310.09536 | 2 | 0 | 0 | 9 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.57.bib | https://aclanthology.org/2023.emnlp-industry.57/ | @inproceedings{zugarini-etal-2023-buster,
title = "{BUSTER}: a {``}{BUS}iness Transaction Entity Recognition{''} dataset",
author = "Zugarini, Andrea and
Zamai, Andrew and
Ernandes, Marco and
Rigutini, Leonardo",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.57",
doi = "10.18653/v1/2023.emnlp-industry.57",
pages = "605--611",
abstract = "Albeit Natural Language Processing has seen major breakthroughs in the last few years, transferring such advances into real-world business cases can be challenging. One of the reasons resides in the displacement between popular benchmarks and actual data. Lack of supervision, unbalanced classes, noisy data and long documents often affect real problems in vertical domains such as finance, law and health. To support industry-oriented research, we present BUSTER, a BUSiness Transaction Entity Recognition dataset. The dataset consists of 3779 manually annotated documents on financial transactions. We establish several baselines exploiting both general-purpose and domain-specific language models. The best performing model is also used to automatically annotate 6196 documents, which we release as an additional silver corpus to BUSTER.",
}
| Although Natural Language Processing has seen major breakthroughs in the last few years, transferring such advances into real-world business cases can be challenging. One of the reasons resides in the displacement between popular benchmarks and actual data. Lack of supervision, unbalanced classes, noisy data and long documents often affect real problems in vertical domains such as finance, law and health. To support industry-oriented research, we present BUSTER, a BUSiness Transaction Entity Recognition dataset. The dataset consists of 3779 manually annotated documents on financial transactions. We establish several baselines exploiting both general-purpose and domain-specific language models. The best performing model is also used to automatically annotate 6196 documents, which we release as an additional silver corpus to BUSTER. | [
"Zugarini, Andrea",
"Zamai, Andrew",
"Ern",
"es, Marco",
"Rigutini, Leonardo"
] | BUSTER: a “BUSiness Transaction Entity Recognition” dataset | emnlp-industry.57 | [
""
] | https://huggingface.co/papers/2402.09916 | 1 | 0 | 0 | 4 | [] | [
"expertai/BUSTER"
] | [] | 1 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.58.bib | https://aclanthology.org/2023.emnlp-industry.58/ | @inproceedings{gee-etal-2023-multi,
title = "Multi-word Tokenization for Sequence Compression",
author = "Gee, Leonidas and
Rigutini, Leonardo and
Ernandes, Marco and
Zugarini, Andrea",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.58",
doi = "10.18653/v1/2023.emnlp-industry.58",
pages = "612--621",
abstract = "Large Language Models have proven highly successful at modelling a variety of tasks. However, this comes at a steep computational cost that hinders wider industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that goes beyond word boundaries by representing frequent multi-word expressions as single tokens. MWTs produce a more compact and efficient tokenization that yields two benefits: (1) Increase in performance due to a greater coverage of input data given a fixed sequence length budget; (2) Faster and lighter inference due to the ability to reduce the sequence length with negligible drops in performance. Our results show that MWT is more robust across shorter sequence lengths, thus allowing for major speedups via early sequence truncation.",
}
| Large Language Models have proven highly successful at modelling a variety of tasks. However, this comes at a steep computational cost that hinders wider industrial uptake. In this paper, we present MWT: a Multi-Word Tokenizer that goes beyond word boundaries by representing frequent multi-word expressions as single tokens. MWTs produce a more compact and efficient tokenization that yields two benefits: (1) Increase in performance due to a greater coverage of input data given a fixed sequence length budget; (2) Faster and lighter inference due to the ability to reduce the sequence length with negligible drops in performance. Our results show that MWT is more robust across shorter sequence lengths, thus allowing for major speedups via early sequence truncation. | [
"Gee, Leonidas",
"Rigutini, Leonardo",
"Ern",
"es, Marco",
"Zugarini, Andrea"
] | Multi-word Tokenization for Sequence Compression | emnlp-industry.58 | 2402.09949 | [
"https://github.com/leonidasy/fast-vocabulary-transfer"
] | https://huggingface.co/papers/2402.09949 | 1 | 0 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.59.bib | https://aclanthology.org/2023.emnlp-industry.59/ | @inproceedings{liu-etal-2023-jarvix,
title = "{J}arvi{X}: A {LLM} No code Platform for Tabular Data Analysis and Optimization",
author = "Liu, Shang-Ching and
Wang, ShengKun and
Chang, Tsungyao and
Lin, Wenqi and
Hsiung, Chung-Wei and
Hsieh, Yi-Chen and
Cheng, Yu-Ping and
Luo, Sian-Hong and
Zhang, Jianwei",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.59",
doi = "10.18653/v1/2023.emnlp-industry.59",
pages = "622--630",
abstract = "In this study, we introduce JarviX, a sophisticated data analytics framework. JarviX is designed to employ Large Language Models (LLMs) to facilitate an automated guide and execute high-precision data analyzes on tabular datasets. This framework emphasizes the significance of varying column types, capitalizing on state-of-the-art LLMs to generate concise data insight summaries, propose relevant analysis inquiries, visualize data effectively, and provide comprehensive explanations for results drawn from an extensive data analysis pipeline. Moreover, JarviX incorporates an automated machine learning (AutoML) pipeline for predictive modeling. This integration forms a comprehensive and automated optimization cycle, which proves particularly advantageous for optimizing machine configuration. The efficacy and adaptability of JarviX are substantiated through a series of practical use case studies.",
}
| In this study, we introduce JarviX, a sophisticated data analytics framework. JarviX is designed to employ Large Language Models (LLMs) to facilitate an automated guide and execute high-precision data analyses on tabular datasets. This framework emphasizes the significance of varying column types, capitalizing on state-of-the-art LLMs to generate concise data insight summaries, propose relevant analysis inquiries, visualize data effectively, and provide comprehensive explanations for results drawn from an extensive data analysis pipeline. Moreover, JarviX incorporates an automated machine learning (AutoML) pipeline for predictive modeling. This integration forms a comprehensive and automated optimization cycle, which proves particularly advantageous for optimizing machine configuration. The efficacy and adaptability of JarviX are substantiated through a series of practical use case studies. | [
"Liu, Shang-Ching",
"Wang, ShengKun",
"Chang, Tsungyao",
"Lin, Wenqi",
"Hsiung, Chung-Wei",
"Hsieh, Yi-Chen",
"Cheng, Yu-Ping",
"Luo, Sian-Hong",
"Zhang, Jianwei"
] | JarviX: A LLM No code Platform for Tabular Data Analysis and Optimization | emnlp-industry.59 | 2312.02213 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.60.bib | https://aclanthology.org/2023.emnlp-industry.60/ | @inproceedings{jayanthi-etal-2023-retrieve,
title = "Retrieve and Copy: Scaling {ASR} Personalization to Large Catalogs",
author = "Jayanthi, Sai Muralidhar and
Kulshreshtha, Devang and
Dingliwal, Saket and
Ronanki, Srikanth and
Bodapati, Sravan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.60",
doi = "10.18653/v1/2023.emnlp-industry.60",
pages = "631--639",
abstract = "Personalization of automatic speech recognition (ASR) models is a widely studied topic because of its many practical applications. Most recently, attention-based contextual biasing techniques are used to improve the recognition of rare words and/or domain specific entities. However, due to performance constraints, the biasing is often limited to a few thousand entities, restricting real-world usability. To address this, we first propose a {``}Retrieve and Copy{''} mechanism to improve latency while retaining the accuracy even when scaled to a large catalog. We also propose a training strategy to overcome the degradation in recall at such scale due to an increased number of confusing entities. Overall, our approach achieves up to 6{\%} more Word Error Rate reduction (WERR) and 3.6{\%} absolute improvement in F1 when compared to a strong baseline. Our method also allows for large catalog sizes of up to 20K without significantly affecting WER and F1-scores, while achieving at least 20{\%} inference speedup per acoustic frame.",
}
| Personalization of automatic speech recognition (ASR) models is a widely studied topic because of its many practical applications. Most recently, attention-based contextual biasing techniques are used to improve the recognition of rare words and/or domain specific entities. However, due to performance constraints, the biasing is often limited to a few thousand entities, restricting real-world usability. To address this, we first propose a {``}Retrieve and Copy{''} mechanism to improve latency while retaining the accuracy even when scaled to a large catalog. We also propose a training strategy to overcome the degradation in recall at such scale due to an increased number of confusing entities. Overall, our approach achieves up to 6{\%} more Word Error Rate reduction (WERR) and 3.6{\%} absolute improvement in F1 when compared to a strong baseline. Our method also allows for large catalog sizes of up to 20K without significantly affecting WER and F1-scores, while achieving at least 20{\%} inference speedup per acoustic frame. | [
"Jayanthi, Sai Muralidhar",
"Kulshreshtha, Devang",
"Dingliwal, Saket",
"Ronanki, Srikanth",
"Bodapati, Sravan"
] | Retrieve and Copy: Scaling ASR Personalization to Large Catalogs | emnlp-industry.60 | 2311.08402 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.61.bib | https://aclanthology.org/2023.emnlp-industry.61/ | @inproceedings{zhang-etal-2023-steer,
title = "{STEER}: Semantic Turn Extension-Expansion Recognition for Voice Assistants",
author = "Zhang, Leon and
Lu, Jiarui and
Moniz, Joel Ruben Antony and
Kulkarni, Aditya and
Piraviperumal, Dhivya and
Tran, Tien Dung and
Tzou, Nick and
Yu, Hong",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.61",
doi = "10.18653/v1/2023.emnlp-industry.61",
pages = "640--649",
abstract = "In the context of a voice assistant system, steering refers to the phenomenon in which a user issues a follow-up command attempting to direct or clarify a previous turn. We propose STEER, a steering detection model that predicts whether a follow-up turn is a user{'}s attempt to steer the previous command. Constructing a training dataset for steering use cases poses challenges due to the cold-start problem. To overcome this, we developed heuristic rules to sample opt-in usage data, approximating positive and negative samples without any annotation. Our experimental results show promising performance in identifying steering intent, with over 95{\%} accuracy on our sampled data. Moreover, STEER, in conjunction with our sampling strategy, aligns effectively with real-world steering scenarios, as evidenced by its strong zero-shot performance on a human-graded evaluation set. In addition to relying solely on user transcripts as input, we introduce STEER+, an enhanced version of the model. STEER+ utilizes a semantic parse tree to provide more context on out-of-vocabulary words, such as named entities that often occur at the sentence boundary. This further improves model performance, reducing error rate in domains where entities frequently appear, such as messaging. Lastly, we present a data analysis that highlights the improvement in user experience when voice assistants support steering use cases.",
}
| In the context of a voice assistant system, steering refers to the phenomenon in which a user issues a follow-up command attempting to direct or clarify a previous turn. We propose STEER, a steering detection model that predicts whether a follow-up turn is a user{'}s attempt to steer the previous command. Constructing a training dataset for steering use cases poses challenges due to the cold-start problem. To overcome this, we developed heuristic rules to sample opt-in usage data, approximating positive and negative samples without any annotation. Our experimental results show promising performance in identifying steering intent, with over 95{\%} accuracy on our sampled data. Moreover, STEER, in conjunction with our sampling strategy, aligns effectively with real-world steering scenarios, as evidenced by its strong zero-shot performance on a human-graded evaluation set. In addition to relying solely on user transcripts as input, we introduce STEER+, an enhanced version of the model. STEER+ utilizes a semantic parse tree to provide more context on out-of-vocabulary words, such as named entities that often occur at the sentence boundary. This further improves model performance, reducing error rate in domains where entities frequently appear, such as messaging. Lastly, we present a data analysis that highlights the improvement in user experience when voice assistants support steering use cases. | [
"Zhang, Leon",
"Lu, Jiarui",
"Moniz, Joel Ruben Antony",
"Kulkarni, Aditya",
"Piraviperumal, Dhivya",
"Tran, Tien Dung",
"Tzou, Nick",
"Yu, Hong"
] | STEER: Semantic Turn Extension-Expansion Recognition for Voice Assistants | emnlp-industry.61 | 2310.16990 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.62.bib | https://aclanthology.org/2023.emnlp-industry.62/ | @inproceedings{tan-etal-2023-self,
title = "Self-Criticism: Aligning Large Language Models with their Understanding of Helpfulness, Honesty, and Harmlessness",
author = "Tan, Xiaoyu and
Shi, Shaojie and
Qiu, Xihe and
Qu, Chao and
Qi, Zhenting and
Xu, Yinghui and
Qi, Yuan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.62",
doi = "10.18653/v1/2023.emnlp-industry.62",
pages = "650--662",
abstract = "Recently, there has been a notable surge in the significance of large language models (LLMs) that engage in conversational-style interactions, such as ChatGPT and Claude, as they contribute significantly to the progress of artificial general intelligence (AGI). Typically, these models undergo a two-phase fine-tuning process: instruction fine-tuning (IF) and reinforcement learning from human feedback (RLHF). These methods aim to align the LLMs to be helpful, honest, and harmless (HHH). However, RLHF, which incorporates independent reward models trained on high-quality human feedback datasets, incurs high costs in terms of hardware resources and human efforts. Therefore, we explore the possibility of aligning LLMs with their own understanding of HHH through IF and in-context learning (ICL). In this study, we propose a novel framework called Self-Criticism, which allows LLMs to align themselves with HHH based on the definition they learned from a large-scale text corpus. We begin by employing IF on a given instruction set and learning HHH discrimination through few-shot ICL. Subsequently, the LLMs evaluate their own generated responses and learn to produce {``}better{''} responses based on self-judgment. Finally, the model is retrained based on the self-generated responses to distill the whole process. By analyzing our proposed method, we also find interesting connections between Self-Criticism and goal-conditioned reinforcement learning, and pseudo-labeling. Experimental results demonstrate that this method achieves nearly identical performance to RLHF in terms of both human evaluation and evaluation by other LLMs, with only a minimal alignment tax.",
}
| Recently, there has been a notable surge in the significance of large language models (LLMs) that engage in conversational-style interactions, such as ChatGPT and Claude, as they contribute significantly to the progress of artificial general intelligence (AGI). Typically, these models undergo a two-phase fine-tuning process: instruction fine-tuning (IF) and reinforcement learning from human feedback (RLHF). These methods aim to align the LLMs to be helpful, honest, and harmless (HHH). However, RLHF, which incorporates independent reward models trained on high-quality human feedback datasets, incurs high costs in terms of hardware resources and human efforts. Therefore, we explore the possibility of aligning LLMs with their own understanding of HHH through IF and in-context learning (ICL). In this study, we propose a novel framework called Self-Criticism, which allows LLMs to align themselves with HHH based on the definition they learned from a large-scale text corpus. We begin by employing IF on a given instruction set and learning HHH discrimination through few-shot ICL. Subsequently, the LLMs evaluate their own generated responses and learn to produce {``}better{''} responses based on self-judgment. Finally, the model is retrained based on the self-generated responses to distill the whole process. By analyzing our proposed method, we also find interesting connections between Self-Criticism and goal-conditioned reinforcement learning, and pseudo-labeling. Experimental results demonstrate that this method achieves nearly identical performance to RLHF in terms of both human evaluation and evaluation by other LLMs, with only a minimal alignment tax. | [
"Tan, Xiaoyu",
"Shi, Shaojie",
"Qiu, Xihe",
"Qu, Chao",
"Qi, Zhenting",
"Xu, Yinghui",
"Qi, Yuan"
] | Self-Criticism: Aligning Large Language Models with their Understanding of Helpfulness, Honesty, and Harmlessness | emnlp-industry.62 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.63.bib | https://aclanthology.org/2023.emnlp-industry.63/ | @inproceedings{fetahu-etal-2023-instructpts,
title = "{I}nstruct{PTS}: Instruction-Tuning {LLM}s for Product Title Summarization",
author = "Fetahu, Besnik and
Chen, Zhiyu and
Rokhlenko, Oleg and
Malmasi, Shervin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.63",
doi = "10.18653/v1/2023.emnlp-industry.63",
pages = "663--674",
abstract = "E-commerce product catalogs contain billions of items. Most products have lengthy titles, as sellers pack them with product attributes to improve retrieval, and highlight key product aspects. This results in a gap between such unnatural products titles, and how customers refer to them. It also limits how e-commerce stores can use these seller-provided titles for recommendation, QA, or review summarization. Inspired by recent work on instruction-tuned LLMs, we present InstructPTS, a controllable approach for the task of Product Title Summarization (PTS). Trained using a novel instruction fine-tuning strategy, our approach is able to summarize product titles according to various criteria (e.g. number of words in a summary, inclusion of specific phrases, etc.). Extensive evaluation on a real-world e-commerce catalog shows that compared to simple fine-tuning of LLMs, our proposed approach can generate more accurate product name summaries, with an improvement of over 14 and 8 BLEU and ROUGE points, respectively.",
}
| E-commerce product catalogs contain billions of items. Most products have lengthy titles, as sellers pack them with product attributes to improve retrieval, and highlight key product aspects. This results in a gap between such unnatural product titles and how customers refer to them. It also limits how e-commerce stores can use these seller-provided titles for recommendation, QA, or review summarization. Inspired by recent work on instruction-tuned LLMs, we present InstructPTS, a controllable approach for the task of Product Title Summarization (PTS). Trained using a novel instruction fine-tuning strategy, our approach is able to summarize product titles according to various criteria (e.g. number of words in a summary, inclusion of specific phrases, etc.). Extensive evaluation on a real-world e-commerce catalog shows that compared to simple fine-tuning of LLMs, our proposed approach can generate more accurate product name summaries, with an improvement of over 14 and 8 BLEU and ROUGE points, respectively. | [
"Fetahu, Besnik",
"Chen, Zhiyu",
"Rokhlenko, Oleg",
"Malmasi, Shervin"
] | InstructPTS: Instruction-Tuning LLMs for Product Title Summarization | emnlp-industry.63 | 2310.16361 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.64.bib | https://aclanthology.org/2023.emnlp-industry.64/ | @inproceedings{wang-etal-2023-llm4vis,
title = "{LLM}4{V}is: Explainable Visualization Recommendation using {C}hat{GPT}",
author = "Wang, Lei and
Zhang, Songheng and
Wang, Yun and
Lim, Ee-Peng and
Wang, Yong",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.64",
doi = "10.18653/v1/2023.emnlp-industry.64",
pages = "675--692",
abstract = "Data visualization is a powerful tool for exploring and communicating insights in various domains. To automate visualization choice for datasets, a task known as visualization recommendation has been proposed. Various machine-learning-based approaches have been developed for this purpose, but they often require a large corpus of dataset-visualization pairs for training and lack natural explanations for their results. To address this research gap, we propose LLM4Vis, a novel ChatGPT-based prompting approach to perform visualization recommendation and return human-like explanations using very few demonstration examples. Our approach involves feature description, demonstration example selection, explanation generation, demonstration example construction, and inference steps. To obtain demonstration examples with high-quality explanations, we propose a new explanation generation bootstrapping to iteratively refine generated explanations by considering the previous generation and template-based hint. Evaluations on the VizML dataset show that LLM4Vis outperforms or performs similarly to supervised learning models like Random Forest, Decision Tree, and MLP, in both few-shot and zero-shot settings. The qualitative evaluation also shows the effectiveness of explanations generated by LLM4Vis.",
}
| Data visualization is a powerful tool for exploring and communicating insights in various domains. To automate visualization choice for datasets, a task known as visualization recommendation has been proposed. Various machine-learning-based approaches have been developed for this purpose, but they often require a large corpus of dataset-visualization pairs for training and lack natural explanations for their results. To address this research gap, we propose LLM4Vis, a novel ChatGPT-based prompting approach to perform visualization recommendation and return human-like explanations using very few demonstration examples. Our approach involves feature description, demonstration example selection, explanation generation, demonstration example construction, and inference steps. To obtain demonstration examples with high-quality explanations, we propose a new explanation generation bootstrapping to iteratively refine generated explanations by considering the previous generation and template-based hint. Evaluations on the VizML dataset show that LLM4Vis outperforms or performs similarly to supervised learning models like Random Forest, Decision Tree, and MLP, in both few-shot and zero-shot settings. The qualitative evaluation also shows the effectiveness of explanations generated by LLM4Vis. | [
"Wang, Lei",
"Zhang, Songheng",
"Wang, Yun",
"Lim, Ee-Peng",
"Wang, Yong"
] | LLM4Vis: Explainable Visualization Recommendation using ChatGPT | emnlp-industry.64 | 2310.07652 | [
"https://github.com/demoleiwang/llm4vis"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.65.bib | https://aclanthology.org/2023.emnlp-industry.65/ | @inproceedings{aggarwal-etal-2023-dublin,
title = "{DUBLIN}: Visual Document Understanding By Language-Image Network",
author = "Aggarwal, Kriti and
Khandelwal, Aditi and
Tanmay, Kumar and
Mohammed, Owais Khan and
Liu, Qiang and
Choudhury, Monojit and
Chauhan, Hardik and
Som, Subhojit and
Chaudhary, Vishrav and
Tiwary, Saurabh",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.65",
doi = "10.18653/v1/2023.emnlp-industry.65",
pages = "693--706",
abstract = "In this paper, we present DUBLIN, a pixel-based model for visual document understanding that does not rely on OCR. DUBLIN can process both images and texts in documents just by the pixels and handle diverse document types and tasks. DUBLIN is pretrained on a large corpus of document images with novel tasks that enhance its visual and linguistic abilities. We evaluate DUBLIN on various benchmarks and show that it achieves state-of-the-art performance on extractive tasks such as DocVQA, InfoVQA, AI2D, OCR-VQA, RefExp, and CORD, as well as strong performance on abstraction datasets such as VisualMRC and text captioning. Our model demonstrates the potential of OCR-free document processing and opens new avenues for applications and research.",
}
| In this paper, we present DUBLIN, a pixel-based model for visual document understanding that does not rely on OCR. DUBLIN can process both images and texts in documents just by the pixels and handle diverse document types and tasks. DUBLIN is pretrained on a large corpus of document images with novel tasks that enhance its visual and linguistic abilities. We evaluate DUBLIN on various benchmarks and show that it achieves state-of-the-art performance on extractive tasks such as DocVQA, InfoVQA, AI2D, OCR-VQA, RefExp, and CORD, as well as strong performance on abstraction datasets such as VisualMRC and text captioning. Our model demonstrates the potential of OCR-free document processing and opens new avenues for applications and research. | [
"Aggarwal, Kriti",
"Kh",
"elwal, Aditi",
"Tanmay, Kumar",
"Mohammed, Owais Khan",
"Liu, Qiang",
"Choudhury, Monojit",
"Chauhan, Hardik",
"Som, Subhojit",
"Chaudhary, Vishrav",
"Tiwary, Saurabh"
] | DUBLIN: Visual Document Understanding By Language-Image Network | emnlp-industry.65 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.66.bib | https://aclanthology.org/2023.emnlp-industry.66/ | @inproceedings{yu-etal-2023-documentnet,
title = "{D}ocument{N}et: Bridging the Data Gap in Document Pre-training",
author = "Yu, Lijun and
Miao, Jin and
Sun, Xiaoyu and
Chen, Jiayi and
Hauptmann, Alexander and
Dai, Hanjun and
Wei, Wei",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.66",
doi = "10.18653/v1/2023.emnlp-industry.66",
pages = "707--722",
abstract = "Document understanding tasks, in particular, Visually-rich Document Entity Retrieval (VDER), have gained significant attention in recent years thanks to their broad applications in enterprise AI. However, publicly available data have been scarce for these tasks due to strict privacy constraints and high annotation costs. To make things worse, the non-overlapping entity spaces from different datasets hinder the knowledge transfer between document types. In this paper, we propose a method to collect massive-scale and weakly labeled data from the web to benefit the training of VDER models. The collected dataset, named DocumentNet, does not depend on specific document types or entity sets, making it universally applicable to all VDER tasks. The current DocumentNet consists of 30M documents spanning nearly 400 document types organized in a four-level ontology. Experiments on a set of broadly adopted VDER tasks show significant improvements when DocumentNet is incorporated into the pre-training for both classic and few-shot learning settings. With the recent emergence of large language models (LLMs), DocumentNet provides a large data source to extend their multimodal capabilities for VDER.",
}
| Document understanding tasks, in particular, Visually-rich Document Entity Retrieval (VDER), have gained significant attention in recent years thanks to their broad applications in enterprise AI. However, publicly available data have been scarce for these tasks due to strict privacy constraints and high annotation costs. To make things worse, the non-overlapping entity spaces from different datasets hinder the knowledge transfer between document types. In this paper, we propose a method to collect massive-scale and weakly labeled data from the web to benefit the training of VDER models. The collected dataset, named DocumentNet, does not depend on specific document types or entity sets, making it universally applicable to all VDER tasks. The current DocumentNet consists of 30M documents spanning nearly 400 document types organized in a four-level ontology. Experiments on a set of broadly adopted VDER tasks show significant improvements when DocumentNet is incorporated into the pre-training for both classic and few-shot learning settings. With the recent emergence of large language models (LLMs), DocumentNet provides a large data source to extend their multimodal capabilities for VDER. | [
"Yu, Lijun",
"Miao, Jin",
"Sun, Xiaoyu",
"Chen, Jiayi",
"Hauptmann, Alex",
"er",
"Dai, Hanjun",
"Wei, Wei"
] | DocumentNet: Bridging the Data Gap in Document Pre-training | emnlp-industry.66 | 2306.08937 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.67.bib | https://aclanthology.org/2023.emnlp-industry.67/ | @inproceedings{kim-etal-2023-relevance,
title = "Relevance-assisted Generation for Robust Zero-shot Retrieval",
author = "Kim, Jihyuk and
Kim, Minsoo and
Park, Joonsuk and
Hwang, Seung-won",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.67",
doi = "10.18653/v1/2023.emnlp-industry.67",
pages = "723--731",
abstract = "Zero-shot retrieval tasks such as the BEIR benchmark reveal out-of-domain generalization as a key weakness of high-performance dense retrievers. As a solution, domain adaptation for dense retrievers has been actively studied. A notable approach is synthesizing domain-specific data, by generating pseudo queries (PQ), for fine-tuning with domain-specific relevance between PQ and documents. Our contribution is showing that key biases can cause sampled PQ to be irrelevant, negatively contributing to generalization. We propose to preempt their generation, by dividing the generation into simpler subtasks, of generating relevance explanations and guiding the generation to avoid negative generalization. Experiment results show that our proposed approach is more robust to domain shifts, validated on challenging BEIR zero-shot retrieval tasks.",
}
| Zero-shot retrieval tasks such as the BEIR benchmark reveal out-of-domain generalization as a key weakness of high-performance dense retrievers. As a solution, domain adaptation for dense retrievers has been actively studied. A notable approach is synthesizing domain-specific data, by generating pseudo queries (PQ), for fine-tuning with domain-specific relevance between PQ and documents. Our contribution is showing that key biases can cause sampled PQ to be irrelevant, negatively contributing to generalization. We propose to preempt their generation, by dividing the generation into simpler subtasks, of generating relevance explanations and guiding the generation to avoid negative generalization. Experiment results show that our proposed approach is more robust to domain shifts, validated on challenging BEIR zero-shot retrieval tasks. | [
"Kim, Jihyuk",
"Kim, Minsoo",
"Park, Joonsuk",
"Hwang, Seung-won"
] | Relevance-assisted Generation for Robust Zero-shot Retrieval | emnlp-industry.67 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.68.bib | https://aclanthology.org/2023.emnlp-industry.68/ | @inproceedings{jain-etal-2023-much,
title = "Too much of product information : Don{'}t worry, let{'}s look for evidence!",
author = "Jain, Aryan and
Rana, Jitenkumar and
Aggarwal, Chetan",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.68",
doi = "10.18653/v1/2023.emnlp-industry.68",
pages = "732--738",
abstract = "Product question answering (PQA) aims to provide an instant response to customer questions posted on shopping message boards, social media, brand websites and retail stores. In this paper, we propose a distantly supervised solution to answer customer questions by using product information. Auto-answering questions using product information poses two main challenges:(i) labelled data is not readily available (ii)lengthy product information requires attending to various parts of the text to answer the question. To this end, we first propose a novel distant supervision based NLI model to prepare training data without any manual efforts. To deal with lengthy context, we factorize answer generation into two sub-problems. First, given product information, model extracts evidence spans relevant to question. Then, model leverages evidence spans to generate answer. Further, we propose two novelties in fine-tuning approach: (i) First, we jointly fine-tune model for both the tasks in end-to-end manner and showcase that it outperforms standard multi-task fine-tuning. (ii) Next, we introduce an auxiliary contrastive loss for evidence extraction. We show that combination of these two ideas achieves an absolute improvement of 6{\%} in accuracy (human evaluation) over baselines.",
}
| Product question answering (PQA) aims to provide an instant response to customer questions posted on shopping message boards, social media, brand websites and retail stores. In this paper, we propose a distantly supervised solution to answer customer questions by using product information. Auto-answering questions using product information poses two main challenges: (i) labelled data is not readily available, and (ii) lengthy product information requires attending to various parts of the text to answer the question. To this end, we first propose a novel distant supervision based NLI model to prepare training data without any manual effort. To deal with lengthy context, we factorize answer generation into two sub-problems. First, given product information, the model extracts evidence spans relevant to the question. Then, the model leverages the evidence spans to generate an answer. Further, we propose two novelties in the fine-tuning approach: (i) First, we jointly fine-tune the model for both tasks in an end-to-end manner and showcase that it outperforms standard multi-task fine-tuning. (ii) Next, we introduce an auxiliary contrastive loss for evidence extraction. We show that the combination of these two ideas achieves an absolute improvement of 6{\%} in accuracy (human evaluation) over baselines. | [
"Jain, Aryan",
"Rana, Jitenkumar",
"Aggarwal, Chetan"
] | Too much of product information : Don't worry, let's look for evidence! | emnlp-industry.68 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.69.bib | https://aclanthology.org/2023.emnlp-industry.69/ | @inproceedings{yu-etal-2023-harnessing,
title = "Harnessing {LLM}s for Temporal Data - A Study on Explainable Financial Time Series Forecasting",
author = "Yu, Xinli and
Chen, Zheng and
Lu, Yanbin",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.69",
doi = "10.18653/v1/2023.emnlp-industry.69",
pages = "739--753",
abstract = "Applying machine learning to financial time series has been an active area of industrial research enabling innovation in market insights, risk management, strategic decision-making, and policy formation. This paper explores the novel use of Large Language Models (LLMs) for explainable financial time series forecasting, addressing challenges in cross-sequence reasoning, multi-modal data integration, and result interpretation that are inherent in traditional approaches. Focusing on NASDAQ-100 stocks, we utilize public historical stock data, company metadata, and economic/financial news. Our experiments employ GPT-4 for zero-shot/few-shot inference and Open LLaMA for instruction-based fine-tuning. The study demonstrates LLMs{'} ability to generate well-reasoned decisions by leveraging cross-sequence information and extracting insights from text and price time series. We show that our LLM-based approach outperforms classic ARMA-GARCH and gradient-boosting tree models. Furthermore, fine-tuned public LLMs, such as Open-LLaMA, can generate reasonable and explainable forecasts, although they underperform compared to GPT-4.",
}
| Applying machine learning to financial time series has been an active area of industrial research enabling innovation in market insights, risk management, strategic decision-making, and policy formation. This paper explores the novel use of Large Language Models (LLMs) for explainable financial time series forecasting, addressing challenges in cross-sequence reasoning, multi-modal data integration, and result interpretation that are inherent in traditional approaches. Focusing on NASDAQ-100 stocks, we utilize public historical stock data, company metadata, and economic/financial news. Our experiments employ GPT-4 for zero-shot/few-shot inference and Open LLaMA for instruction-based fine-tuning. The study demonstrates LLMs{'} ability to generate well-reasoned decisions by leveraging cross-sequence information and extracting insights from text and price time series. We show that our LLM-based approach outperforms classic ARMA-GARCH and gradient-boosting tree models. Furthermore, fine-tuned public LLMs, such as Open-LLaMA, can generate reasonable and explainable forecasts, although they underperform compared to GPT-4. | [
"Yu, Xinli",
"Chen, Zheng",
"Lu, Yanbin"
] | Harnessing LLMs for Temporal Data - A Study on Explainable Financial Time Series Forecasting | emnlp-industry.69 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.70.bib | https://aclanthology.org/2023.emnlp-industry.70/ | @inproceedings{nguyen-etal-2023-vigptqa,
title = "{V}i{GPTQA} - State-of-the-Art {LLM}s for {V}ietnamese Question Answering: System Overview, Core Models Training, and Evaluations",
author = "Nguyen, Minh Thuan and
Tran, Khanh Tung and
Nguyen, Nhu Van and
Vu, Xuan-Son",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.70",
doi = "10.18653/v1/2023.emnlp-industry.70",
pages = "754--764",
abstract = "Large language models (LLMs) and their applications in low-resource languages (such as in Vietnamese) are limited due to lack of training data and benchmarking datasets. This paper introduces a practical real-world implementation of a question answering system for Vietnamese, called ViGPTQA, leveraging the power of LLM. Since there is no effective LLM in Vietnamese to date, we also propose, evaluate, and open-source an instruction-tuned LLM for Vietnamese, named ViGPT. ViGPT demonstrates exceptional performances, especially on real-world scenarios. We curate a new set of benchmark datasets that encompass both AI and human-generated data, providing a comprehensive evaluation framework for Vietnamese LLMs. By achieving state-of-the-art results and approaching other multilingual LLMs, our instruction-tuned LLM underscores the need for dedicated Vietnamese-specific LLMs. Our open-source model supports customized and privacy-fulfilled Vietnamese language processing systems.",
}
| Large language models (LLMs) and their applications in low-resource languages (such as in Vietnamese) are limited due to lack of training data and benchmarking datasets. This paper introduces a practical real-world implementation of a question answering system for Vietnamese, called ViGPTQA, leveraging the power of LLM. Since there is no effective LLM in Vietnamese to date, we also propose, evaluate, and open-source an instruction-tuned LLM for Vietnamese, named ViGPT. ViGPT demonstrates exceptional performances, especially on real-world scenarios. We curate a new set of benchmark datasets that encompass both AI and human-generated data, providing a comprehensive evaluation framework for Vietnamese LLMs. By achieving state-of-the-art results and approaching other multilingual LLMs, our instruction-tuned LLM underscores the need for dedicated Vietnamese-specific LLMs. Our open-source model supports customized and privacy-fulfilled Vietnamese language processing systems. | [
"Nguyen, Minh Thuan",
"Tran, Khanh Tung",
"Nguyen, Nhu Van",
"Vu, Xuan-Son"
] | ViGPTQA - State-of-the-Art LLMs for Vietnamese Question Answering: System Overview, Core Models Training, and Evaluations | emnlp-industry.70 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.71.bib | https://aclanthology.org/2023.emnlp-industry.71/ | @inproceedings{jo-etal-2023-integrated,
title = "An Integrated Search System for {K}orea Weather Data",
author = "Jo, Jinkyung and
Ki, Dayeon and
Yoon, Soyoung and
Seo, Minjoon",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.71",
doi = "10.18653/v1/2023.emnlp-industry.71",
pages = "765--774",
abstract = "We introduce WeatherSearch, an integrated search system deployed at the Korea Meteorological Administration (KMA). WeatherSearch enables users to retrieve all the relevant data for weather forecasting from a massive weather database with simple natural language queries. We carefully design and conduct multiple expert surveys and interviews for template creation and apply data augmentation techniques including template filling to collect 4 million data points with minimal human labors. We then finetune mT5 on the collected dataset and achieve an average MRR of 0.66 and an average Recall of 0.82. We also discuss weather-data-specific characteristics that should be taken into account for creating such a system. We hope our paper serves as a simple and effective guideline for those designing similar systems in other regions of the world.",
}
| We introduce WeatherSearch, an integrated search system deployed at the Korea Meteorological Administration (KMA). WeatherSearch enables users to retrieve all the relevant data for weather forecasting from a massive weather database with simple natural language queries. We carefully design and conduct multiple expert surveys and interviews for template creation and apply data augmentation techniques including template filling to collect 4 million data points with minimal human labor. We then finetune mT5 on the collected dataset and achieve an average MRR of 0.66 and an average Recall of 0.82. We also discuss weather-data-specific characteristics that should be taken into account for creating such a system. We hope our paper serves as a simple and effective guideline for those designing similar systems in other regions of the world. | [
"Jo, Jinkyung",
"Ki, Dayeon",
"Yoon, Soyoung",
"Seo, Minjoon"
] | An Integrated Search System for Korea Weather Data | emnlp-industry.71 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.72.bib | https://aclanthology.org/2023.emnlp-industry.72/ | @inproceedings{li-etal-2023-adaptive,
title = "Adaptive Hyper-parameter Learning for Deep Semantic Retrieval",
author = "Li, Mingming and
Yuan, Chunyuan and
Wang, Huimu and
Wang, Peng and
Zhuo, Jingwei and
Wang, Binbin and
Liu, Lin and
Xu, Sulong",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.72",
doi = "10.18653/v1/2023.emnlp-industry.72",
pages = "775--782",
abstract = "Deep semantic retrieval has achieved remarkable success in online E-commerce applications. The majority of methods aim to distinguish positive items and negative items for each query by utilizing margin loss or softmax loss. Despite their decent performance, these methods are highly sensitive to hyper-parameters, i.e., margin and temperature $\tau$, which measure the similarity of negative pairs and affect the distribution of items in metric space. How to design and choose adaptively parameters for different pairs is still an open challenge. Recently several methods have attempted to alleviate the above problem by learning each parameter through trainable/statistical methods in the recommendation. We argue that those are not suitable for retrieval scenarios, due to the agnosticism and diversity of the queries. To fully overcome this limitation, we propose a novel adaptive metric learning method that designs a simple and universal hyper-parameter-free learning method to improve the performance of retrieval. Specifically, we first propose a method that adaptive obtains the hyper-parameters by relying on the batch similarity without fixed or extra-trainable hyper-parameters. Subsequently, we adopt a symmetric metric learning method to mitigate model collapse issues. Furthermore, the proposed method is general and sheds a highlight on other fields. Extensive experiments demonstrate our method significantly outperforms previous methods on a real-world dataset, highlighting the superiority and effectiveness of our method. This method has been successfully deployed on an online E-commerce search platform and brought substantial economic benefits.",
}
| Deep semantic retrieval has achieved remarkable success in online E-commerce applications. The majority of methods aim to distinguish positive items and negative items for each query by utilizing margin loss or softmax loss. Despite their decent performance, these methods are highly sensitive to hyper-parameters, i.e., margin and temperature $\tau$, which measure the similarity of negative pairs and affect the distribution of items in metric space. How to adaptively design and choose parameters for different pairs is still an open challenge. Recently, several methods have attempted to alleviate the above problem by learning each parameter through trainable/statistical methods in the recommendation. We argue that those are not suitable for retrieval scenarios, due to the agnosticism and diversity of the queries. To fully overcome this limitation, we propose a novel adaptive metric learning method that designs a simple and universal hyper-parameter-free learning method to improve the performance of retrieval. Specifically, we first propose a method that adaptively obtains the hyper-parameters by relying on the batch similarity without fixed or extra-trainable hyper-parameters. Subsequently, we adopt a symmetric metric learning method to mitigate model collapse issues. Furthermore, the proposed method is general and sheds light on other fields. Extensive experiments demonstrate our method significantly outperforms previous methods on a real-world dataset, highlighting the superiority and effectiveness of our method. This method has been successfully deployed on an online E-commerce search platform and brought substantial economic benefits. | [
"Li, Mingming",
"Yuan, Chunyuan",
"Wang, Huimu",
"Wang, Peng",
"Zhuo, Jingwei",
"Wang, Binbin",
"Liu, Lin",
"Xu, Sulong"
] | Adaptive Hyper-parameter Learning for Deep Semantic Retrieval | emnlp-industry.72 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.73.bib | https://aclanthology.org/2023.emnlp-industry.73/ | @inproceedings{han-etal-2023-sample,
title = "On Sample-Efficient Code Generation",
author = "Han, Hojae and
Kim, Yu Jin and
Kim, Byoungjip and
Lee, Youngwon and
Lee, Kyungjae and
Lee, Kyungmin and
Lee, Moontae and
Bae, Kyunghoon and
Hwang, Seung-won",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.73",
doi = "10.18653/v1/2023.emnlp-industry.73",
pages = "783--791",
abstract = "Large language models often struggle to predict runtime behavior in code generation tasks, leading to a reliance on rejection sampling (best-of-n) to generate multiple code snippets then select the best. Our distinction is reducing sampling costs, without compromising generation quality. We introduce EFFICODE, a novel framework that prioritizes sampling on test problems that models can solve. We show how EFFICODE estimates solvability to optimize computational costs during multiple sampling. Based on empirical evidence, EFFICODE consistently demonstrates reduced sampling budgets while maintaining comparable code generation performance, especially when problems are challenging. In addition, utilizing EFFICODE to rank sampled code snippets also shows its effectiveness in answer code selection for reducing temporal costs, by not requiring any execution or test case generation.",
}
| Large language models often struggle to predict runtime behavior in code generation tasks, leading to a reliance on rejection sampling (best-of-n) to generate multiple code snippets then select the best. Our distinction is reducing sampling costs, without compromising generation quality. We introduce EFFICODE, a novel framework that prioritizes sampling on test problems that models can solve. We show how EFFICODE estimates solvability to optimize computational costs during multiple sampling. Based on empirical evidence, EFFICODE consistently demonstrates reduced sampling budgets while maintaining comparable code generation performance, especially when problems are challenging. In addition, utilizing EFFICODE to rank sampled code snippets also shows its effectiveness in answer code selection for reducing temporal costs, by not requiring any execution or test case generation. | [
"Han, Hojae",
"Kim, Yu Jin",
"Kim, Byoungjip",
"Lee, Youngwon",
"Lee, Kyungjae",
"Lee, Kyungmin",
"Lee, Moontae",
"Bae, Kyunghoon",
"Hwang, Seung-won"
] | On Sample-Efficient Code Generation | emnlp-industry.73 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.emnlp-industry.74.bib | https://aclanthology.org/2023.emnlp-industry.74/ | @inproceedings{cheng-etal-2023-batch,
title = "Batch Prompting: Efficient Inference with Large Language Model {API}s",
author = "Cheng, Zhoujun and
Kasai, Jungo and
Yu, Tao",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.74",
doi = "10.18653/v1/2023.emnlp-industry.74",
pages = "792--810",
abstract = "Performing inference on large volumes of samples with large language models (LLMs) can be computationally and financially costly in industry and real-world use. We propose batch prompting, a simple yet effective prompting approach that enables the LLM to run inference in batches, instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inverse linearly with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly (up to $5\times$ with six samples in batch) reduces the LLM (Codex) inference token and time costs while achieving better or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5 and GPT-4, we show the benefits of batch prompting also hold. Further analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Moreover, batch prompting can be applied across different reasoning methods using LLMs. Our code is released at the site https://github.com/xlang-ai/batch-prompting.",
}
| Performing inference on large volumes of samples with large language models (LLMs) can be computationally and financially costly in industry and real-world use. We propose batch prompting, a simple yet effective prompting approach that enables the LLM to run inference in batches, instead of one sample at a time. Our method reduces both token and time costs while retaining downstream performance. We theoretically demonstrate that under a few-shot in-context learning setting, the inference costs decrease almost inverse linearly with the number of samples in each batch. We extensively validate the effectiveness of batch prompting on ten datasets across commonsense QA, arithmetic reasoning, and NLI/NLU: batch prompting significantly (up to $5\times$ with six samples in batch) reduces the LLM (Codex) inference token and time costs while achieving better or comparable performance. For state-of-the-art Chat-based LLMs, e.g., GPT-3.5 and GPT-4, we show the benefits of batch prompting also hold. Further analysis shows that the number of samples in each batch and the complexity of tasks affect its performance. Moreover, batch prompting can be applied across different reasoning methods using LLMs. Our code is released at the site https://github.com/xlang-ai/batch-prompting. | [
"Cheng, Zhoujun",
"Kasai, Jungo",
"Yu, Tao"
] | Batch Prompting: Efficient Inference with Large Language Model APIs | emnlp-industry.74 | 2301.08721 | [
"https://github.com/hkunlp/batch-prompting"
] | https://huggingface.co/papers/2301.08721 | 0 | 0 | 0 | 3 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.emnlp-industry.75.bib | https://aclanthology.org/2023.emnlp-industry.75/ | @inproceedings{chen-etal-2023-graph,
title = "Graph Meets {LLM}: A Novel Approach to Collaborative Filtering for Robust Conversational Understanding",
author = "Chen, Zheng and
Jiang, Ziyan and
Yang, Fan and
Cho, Eunah and
Fan, Xing and
Huang, Xiaojiang and
Lu, Yanbin and
Galstyan, Aram",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.75",
doi = "10.18653/v1/2023.emnlp-industry.75",
pages = "811--819",
abstract = "A Personalized Query Rewriting system strives to minimize defective queries to ensure robust conversational functionality by considering individual user behavior and preferences. It{'}s designed as a search-based system, maintaining a user index of past successful interactions with the conversational AI. However, this method faces challenges with unseen interactions, which refers to novel user interactions not covered by the user{'}s historical index. This paper introduces our Collaborative Query Rewriting approach, which utilizes underlying topological information to assist in rewriting defective queries arising from unseen user interactions. This approach begins by constructing a {``}User Feedback Interaction Graph{''} (FIG) using historical user-entity interactions. Subsequently, we traverse through the graph edges to establish an enhanced user index, referred to as the {``}collaborative user index{''}. This paper then further explores the use of Large Language Models (LLMs) in conjunction with graph traversal, leading to a significant increase in index coverage for unseen interactions. The effectiveness of our proposed approach has been proven through experiments on a large-scale real-world dataset and online A/B experiments.",
}
| A Personalized Query Rewriting system strives to minimize defective queries to ensure robust conversational functionality by considering individual user behavior and preferences. It{'}s designed as a search-based system, maintaining a user index of past successful interactions with the conversational AI. However, this method faces challenges with unseen interactions, which refers to novel user interactions not covered by the user{'}s historical index. This paper introduces our Collaborative Query Rewriting approach, which utilizes underlying topological information to assist in rewriting defective queries arising from unseen user interactions. This approach begins by constructing a {``}User Feedback Interaction Graph{''} (FIG) using historical user-entity interactions. Subsequently, we traverse through the graph edges to establish an enhanced user index, referred to as the {``}collaborative user index{''}. This paper then further explores the use of Large Language Models (LLMs) in conjunction with graph traversal, leading to a significant increase in index coverage for unseen interactions. The effectiveness of our proposed approach has been proven through experiments on a large-scale real-world dataset and online A/B experiments. | [
"Chen, Zheng",
"Jiang, Ziyan",
"Yang, Fan",
"Cho, Eunah",
"Fan, Xing",
"Huang, Xiaojiang",
"Lu, Yanbin",
"Galstyan, Aram"
] | Graph Meets LLM: A Novel Approach to Collaborative Filtering for Robust Conversational Understanding | emnlp-industry.75 | 2305.14449 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.76.bib | https://aclanthology.org/2023.emnlp-industry.76/ | @inproceedings{sun-etal-2023-delphi,
title = "{DELPHI}: Data for Evaluating {LLM}s{'} Performance in Handling Controversial Issues",
author = "Sun, David and
Abzaliev, Artem and
Kotek, Hadas and
Klein, Christopher and
Xiu, Zidi and
Williams, Jason",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.76",
doi = "10.18653/v1/2023.emnlp-industry.76",
pages = "820--827",
abstract = "Controversy is a reflection of our zeitgeist, and an important aspect to any discourse. The rise of large language models (LLMs) as conversational systems has increased public reliance on these systems for answers to their various questions. Consequently, it is crucial to systematically examine how these models respond to questions that pertaining to ongoing debates. However, few such datasets exist in providing human-annotated labels reflecting the contemporary discussions. To foster research in this area, we propose a novel construction of a controversial questions dataset, expanding upon the publicly released Quora Question Pairs Dataset. This dataset presents challenges concerning knowledge recency, safety, fairness, and bias. We evaluate different LLMs using a subset of this dataset, illuminating how they handle controversial issues and the stances they adopt. This research ultimately contributes to our understanding of LLMs{'} interaction with controversial issues, paving the way for improvements in their comprehension and handling of complex societal debates.",
}
| Controversy is a reflection of our zeitgeist, and an important aspect of any discourse. The rise of large language models (LLMs) as conversational systems has increased public reliance on these systems for answers to their various questions. Consequently, it is crucial to systematically examine how these models respond to questions pertaining to ongoing debates. However, few such datasets exist that provide human-annotated labels reflecting contemporary discussions. To foster research in this area, we propose a novel construction of a controversial questions dataset, expanding upon the publicly released Quora Question Pairs Dataset. This dataset presents challenges concerning knowledge recency, safety, fairness, and bias. We evaluate different LLMs using a subset of this dataset, illuminating how they handle controversial issues and the stances they adopt. This research ultimately contributes to our understanding of LLMs{'} interaction with controversial issues, paving the way for improvements in their comprehension and handling of complex societal debates. | [
"Sun, David",
"Abzaliev, Artem",
"Kotek, Hadas",
"Klein, Christopher",
"Xiu, Zidi",
"Williams, Jason"
] | DELPHI: Data for Evaluating LLMs' Performance in Handling Controversial Issues | emnlp-industry.76 | 2310.18130 | [
"https://github.com/zidixiu/delphi"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.emnlp-industry.77.bib | https://aclanthology.org/2023.emnlp-industry.77/ | @inproceedings{haq-etal-2023-angel,
title = "Angel: Enterprise Search System for the Non-Profit Industry",
author = "Haq, Saiful and
Sharma, Ashutosh and
Bhattacharyya, Pushpak",
editor = "Wang, Mingxuan and
Zitouni, Imed",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing: Industry Track",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-industry.77",
doi = "10.18653/v1/2023.emnlp-industry.77",
pages = "828--835",
abstract = "Non-profit industry need a system for accurately matching fund-seekers (e.g., AMERICAN NATIONAL RED CROSS) with fund-givers (e.g., BILL AND MELINDA GATES FOUNDATION) aligned in cause (e.g., cancer) and target beneficiary group (e.g., children). In this paper, we create an enterprise search system {``}ANGEL{''} for the non-profit industry that takes a fund-giver{'}s mission description as input and returns a ranked list of fund-seekers as output, and vice-versa. ANGEL employs ColBERT, a neural information retrieval model, which we enhance by exploiting the two techniques of (a) Syntax-aware local attention (SLA) to combine syntactic information in the mission description with multi-head self-attention and (b) Dense Pseudo Relevance Feedback (DPRF) for augmentation of short mission descriptions. We create a mapping dictionary {``}non-profit-dict{''} to curate a {``}non-profit-search database{''} containing information on 594K fund-givers and 194K fund-seekers from IRS-990 filings for the non-profit industry search engines . We also curate a {``}non-profit-evaluation{''} dataset containing scored matching between 463 fund-givers and 100 fund-seekers. The research is in collaboration with a philanthropic startup that identifies itself as an {``}AI matching platform, fundraising assistant, and philanthropy search base.{''} Domain experts at the philanthropic startup annotate the non-profit evaluation dataset and continuously evaluate the performance of ANGEL. ANGEL achieves an improvement of 0.14 MAP@10 and 0.16 MRR@10 over the state-of-the-art baseline on the non-profit evaluation dataset. To the best of our knowledge, ours is the first effort at building an enterprise search engine based on neural information retrieval for the non-profit industry.",
}
| The non-profit industry needs a system for accurately matching fund-seekers (e.g., AMERICAN NATIONAL RED CROSS) with fund-givers (e.g., BILL AND MELINDA GATES FOUNDATION) aligned in cause (e.g., cancer) and target beneficiary group (e.g., children). In this paper, we create an enterprise search system {``}ANGEL{''} for the non-profit industry that takes a fund-giver{'}s mission description as input and returns a ranked list of fund-seekers as output, and vice-versa. ANGEL employs ColBERT, a neural information retrieval model, which we enhance by exploiting the two techniques of (a) Syntax-aware local attention (SLA) to combine syntactic information in the mission description with multi-head self-attention and (b) Dense Pseudo Relevance Feedback (DPRF) for augmentation of short mission descriptions. We create a mapping dictionary {``}non-profit-dict{''} to curate a {``}non-profit-search database{''} containing information on 594K fund-givers and 194K fund-seekers from IRS-990 filings for non-profit industry search engines. We also curate a {``}non-profit-evaluation{''} dataset containing scored matchings between 463 fund-givers and 100 fund-seekers. The research is in collaboration with a philanthropic startup that identifies itself as an {``}AI matching platform, fundraising assistant, and philanthropy search base.{''} Domain experts at the philanthropic startup annotate the non-profit evaluation dataset and continuously evaluate the performance of ANGEL. ANGEL achieves an improvement of 0.14 MAP@10 and 0.16 MRR@10 over the state-of-the-art baseline on the non-profit evaluation dataset. To the best of our knowledge, ours is the first effort at building an enterprise search engine based on neural information retrieval for the non-profit industry. | [
"Haq, Saiful",
"Sharma, Ashutosh",
"Bhattacharyya, Pushpak"
] | Angel: Enterprise Search System for the Non-Profit Industry | emnlp-industry.77 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.1.bib | https://aclanthology.org/2023.findings-emnlp.1/ | @inproceedings{manevich-etal-2023-multi,
title = "Multi Document Summarization Evaluation in the Presence of Damaging Content",
author = "Manevich, Avshalom and
Carmel, David and
Cohen, Nachshon and
Kravi, Elad and
Shapira, Ori",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.1",
doi = "10.18653/v1/2023.findings-emnlp.1",
pages = "1--12",
abstract = "In the Multi-document summarization (MDS) task, a summary is produced for a given set of documents. A recent line of research introduced the concept of damaging documents, denoting documents that should not be exposed to readers due to various reasons. In the presence of damaging documents, a summarizer is ideally expected to exclude damaging content in its output. Existing metrics evaluate a summary based on aspects such as relevance and consistency with the source documents. We propose to additionally measure the ability of MDS systems to properly handle damaging documents in their input set. To that end, we offer two novel metrics based on lexical similarity and language model likelihood. A set of experiments demonstrates the effectiveness of our metrics in measuring the ability of MDS systems to summarize a set of documents while eliminating damaging content from their summaries.",
}
| In the Multi-document summarization (MDS) task, a summary is produced for a given set of documents. A recent line of research introduced the concept of damaging documents, denoting documents that should not be exposed to readers due to various reasons. In the presence of damaging documents, a summarizer is ideally expected to exclude damaging content in its output. Existing metrics evaluate a summary based on aspects such as relevance and consistency with the source documents. We propose to additionally measure the ability of MDS systems to properly handle damaging documents in their input set. To that end, we offer two novel metrics based on lexical similarity and language model likelihood. A set of experiments demonstrates the effectiveness of our metrics in measuring the ability of MDS systems to summarize a set of documents while eliminating damaging content from their summaries. | [
"Manevich, Avshalom",
"Carmel, David",
"Cohen, Nachshon",
"Kravi, Elad",
"Shapira, Ori"
] | Multi Document Summarization Evaluation in the Presence of Damaging Content | findings-emnlp.1 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.2.bib | https://aclanthology.org/2023.findings-emnlp.2/ | @inproceedings{gao-etal-2023-guiding,
title = "Guiding {AMR} Parsing with Reverse Graph Linearization",
author = "Gao, Bofei and
Chen, Liang and
Wang, Peiyi and
Sui, Zhifang and
Chang, Baobao",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.2",
doi = "10.18653/v1/2023.findings-emnlp.2",
pages = "13--26",
abstract = "Abstract Meaning Representation (AMR) parsing aims to extract an abstract semantic graph from a given sentence. The sequence-to-sequence approaches, which linearize the semantic graph into a sequence of nodes and edges and generate the linearized graph directly, have achieved good performance. However, we observed that these approaches suffer from structure loss accumulation during the decoding process, leading to a much lower F1-score for nodes and edges decoded later compared to those decoded earlier. To address this issue, we propose a novel Reverse Graph Linearization (RGL) enhanced framework. RGL defines both default and reverse linearization orders of an AMR graph, where most structures at the back part of the default order appear at the front part of the reversed order and vice versa. RGL incorporates the reversed linearization to the original AMR parser through a two-pass self-distillation mechanism, which guides the model when generating the default linearizations. Our analysis shows that our proposed method significantly mitigates the problem of structure loss accumulation, outperforming the previously best AMR parsing model by 0.8 and 0.5 Smatch scores on the AMR 2.0 and AMR 3.0 dataset, respectively. The code are available at \url{https://github.com/pkunlp-icler/AMR_reverse_graph_linearization}.",
}
| Abstract Meaning Representation (AMR) parsing aims to extract an abstract semantic graph from a given sentence. The sequence-to-sequence approaches, which linearize the semantic graph into a sequence of nodes and edges and generate the linearized graph directly, have achieved good performance. However, we observed that these approaches suffer from structure loss accumulation during the decoding process, leading to a much lower F1-score for nodes and edges decoded later compared to those decoded earlier. To address this issue, we propose a novel Reverse Graph Linearization (RGL) enhanced framework. RGL defines both default and reverse linearization orders of an AMR graph, where most structures at the back part of the default order appear at the front part of the reversed order and vice versa. RGL incorporates the reversed linearization into the original AMR parser through a two-pass self-distillation mechanism, which guides the model when generating the default linearizations. Our analysis shows that our proposed method significantly mitigates the problem of structure loss accumulation, outperforming the previously best AMR parsing model by 0.8 and 0.5 Smatch scores on the AMR 2.0 and AMR 3.0 datasets, respectively. The code is available at \url{https://github.com/pkunlp-icler/AMR_reverse_graph_linearization}. | [
"Gao, Bofei",
"Chen, Liang",
"Wang, Peiyi",
"Sui, Zhifang",
"Chang, Baobao"
] | Guiding AMR Parsing with Reverse Graph Linearization | findings-emnlp.2 | 2310.08860 | [
"https://github.com/pkunlp-icler/amr_reverse_graph_linearization"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.3.bib | https://aclanthology.org/2023.findings-emnlp.3/ | @inproceedings{li-etal-2023-translate,
title = "Translate the Beauty in Songs: Jointly Learning to Align Melody and Translate Lyrics",
author = "Li, Chengxi and
Fan, Kai and
Bu, Jiajun and
Chen, Boxing and
Huang, Zhongqiang and
Yu, Zhi",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.3",
doi = "10.18653/v1/2023.findings-emnlp.3",
pages = "27--39",
abstract = "Song translation requires both translation of lyrics and alignment of music notes so that the resulting verse can be sung to the accompanying melody, which is a challenging problem that has attracted some interests in different aspects of the translation process. In this paper, we propose Lyrics-Melody Translation with Adaptive Grouping (LTAG), a holistic solution to automatic song translation by jointly modeling lyric translation and lyrics-melody alignment. It is a novel encoder-decoder framework that can simultaneously translate the source lyrics and determine the number of aligned notes at each decoding step through an adaptive note grouping module. To address data scarcity, we commissioned a small amount of training data annotated specifically for this task and used large amounts of automatic training data through back-translation. Experiments conducted on an English-Chinese song translation data set show the effectiveness of our model in both automatic and human evaluations.",
}
| Song translation requires both translation of lyrics and alignment of music notes so that the resulting verse can be sung to the accompanying melody, which is a challenging problem that has attracted some interest in different aspects of the translation process. In this paper, we propose Lyrics-Melody Translation with Adaptive Grouping (LTAG), a holistic solution to automatic song translation by jointly modeling lyric translation and lyrics-melody alignment. It is a novel encoder-decoder framework that can simultaneously translate the source lyrics and determine the number of aligned notes at each decoding step through an adaptive note grouping module. To address data scarcity, we commissioned a small amount of training data annotated specifically for this task and used large amounts of automatic training data through back-translation. Experiments conducted on an English-Chinese song translation data set show the effectiveness of our model in both automatic and human evaluations. | [
"Li, Chengxi",
"Fan, Kai",
"Bu, Jiajun",
"Chen, Boxing",
"Huang, Zhongqiang",
"Yu, Zhi"
] | Translate the Beauty in Songs: Jointly Learning to Align Melody and Translate Lyrics | findings-emnlp.3 | 2303.15705 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.4.bib | https://aclanthology.org/2023.findings-emnlp.4/ | @inproceedings{madhani-etal-2023-aksharantar,
title = "Aksharantar: Open {I}ndic-language Transliteration datasets and models for the Next Billion Users",
author = "Madhani, Yash and
Parthan, Sushane and
Bedekar, Priyanka and
Nc, Gokul and
Khapra, Ruchi and
Kunchukuttan, Anoop and
Kumar, Pratyush and
Khapra, Mitesh",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.4",
doi = "10.18653/v1/2023.findings-emnlp.4",
pages = "40--57",
abstract = "Transliteration is very important in the Indian language context due to the usage of multiple scripts and the widespread use of romanized inputs. However, few training and evaluation sets are publicly available. We introduce Aksharantar, the largest publicly available transliteration dataset for Indian languages created by mining from monolingual and parallel corpora, as well as collecting data from human annotators. The dataset contains 26 million transliteration pairs for 21 Indic languages from 3 language families using 12 scripts. Aksharantar is 21 times larger than existing datasets and is the first publicly available dataset for 7 languages and 1 language family. We also introduce a test set of 103k word pairs for 19 languages that enables a fine-grained analysis of transliteration models on native origin words, foreign words, frequent words, and rare words. Using the training set, we trained IndicXlit, a multilingual transliteration model that improves accuracy by 15{\%} on the Dakshina test set, and establishes strong baselines on the Aksharantar testset introduced in this work. The models, mining scripts, transliteration guidelines, and datasets are available at https://github.com/AI4Bharat/IndicXlit under open-source licenses.",
}
| Transliteration is very important in the Indian language context due to the usage of multiple scripts and the widespread use of romanized inputs. However, few training and evaluation sets are publicly available. We introduce Aksharantar, the largest publicly available transliteration dataset for Indian languages created by mining from monolingual and parallel corpora, as well as collecting data from human annotators. The dataset contains 26 million transliteration pairs for 21 Indic languages from 3 language families using 12 scripts. Aksharantar is 21 times larger than existing datasets and is the first publicly available dataset for 7 languages and 1 language family. We also introduce a test set of 103k word pairs for 19 languages that enables a fine-grained analysis of transliteration models on native origin words, foreign words, frequent words, and rare words. Using the training set, we trained IndicXlit, a multilingual transliteration model that improves accuracy by 15{\%} on the Dakshina test set, and establishes strong baselines on the Aksharantar testset introduced in this work. The models, mining scripts, transliteration guidelines, and datasets are available at https://github.com/AI4Bharat/IndicXlit under open-source licenses. | [
"Madhani, Yash",
"Parthan, Sushane",
"Bedekar, Priyanka",
"Nc, Gokul",
"Khapra, Ruchi",
"Kunchukuttan, Anoop",
"Kumar, Pratyush",
"Khapra, Mitesh"
] | Aksharantar: Open Indic-language Transliteration datasets and models for the Next Billion Users | findings-emnlp.4 | 2205.03018 | [
"https://github.com/AI4Bharat/IndicXlit"
] | https://huggingface.co/papers/2205.03018 | 0 | 0 | 0 | 8 | [] | [
"ai4bharat/Aksharantar"
] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.5.bib | https://aclanthology.org/2023.findings-emnlp.5/ | @inproceedings{wang-etal-2023-pretraining,
title = "Pretraining Without Attention",
author = "Wang, Junxiong and
Yan, Jing Nathan and
Gu, Albert and
Rush, Alexander",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.5",
doi = "10.18653/v1/2023.findings-emnlp.5",
pages = "58--69",
abstract = "Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures. The model learns static layers that do not consider pair-wise interactions. Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar average accuracy, the approach has different inductive biases than BERT and scales more efficiently to longer sequences.",
}
| Transformers have been essential to pretraining success in NLP. While other architectures have been used, downstream accuracy is either significantly worse, or requires attention layers to match standard benchmarks such as GLUE. This work explores pretraining without attention by using recent advances in sequence routing based on state-space models (SSMs). Our proposed model, Bidirectional Gated SSM (BiGS), combines SSM layers with a multiplicative gating architecture that has been effective in simplified sequence modeling architectures. The model learns static layers that do not consider pair-wise interactions. Even so, BiGS is able to match BERT pretraining accuracy on GLUE and can be extended to long-form pretraining of 4096 tokens without approximation. Analysis shows that while the models have similar average accuracy, the approach has different inductive biases than BERT and scales more efficiently to longer sequences. | [
"Wang, Junxiong",
"Yan, Jing Nathan",
"Gu, Albert",
"Rush, Alex",
"er"
] | Pretraining Without Attention | findings-emnlp.5 | 2212.10544 | [
"https://github.com/jxiw/bigs"
] | https://huggingface.co/papers/2212.10544 | 1 | 0 | 0 | 4 | [
"JunxiongWang/BiGS_4096",
"JunxiongWang/BiGS_512",
"JunxiongWang/BiGS_512_MNLI",
"JunxiongWang/BiGS_1024",
"JunxiongWang/BiGS_128",
"JunxiongWang/BiGS_128_MNLI"
] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.6.bib | https://aclanthology.org/2023.findings-emnlp.6/ | @inproceedings{son-oh-2023-time,
title = "Time-Aware Representation Learning for Time-Sensitive Question Answering",
author = "Son, Jungbin and
Oh, Alice",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.6",
doi = "10.18653/v1/2023.findings-emnlp.6",
pages = "70--77",
abstract = "Time is one of the crucial factors in real-world question answering (QA) problems. However, language models have difficulty understanding the relationships between time specifiers, such as {`}after{'} and {`}before{'}, and numbers, since existing QA datasets do not include sufficient time expressions. To address this issue, we propose a Time-Context aware Question Answering (TCQA) framework. We suggest a Time-Context dependent Span Extraction (TCSE) task, and build a time-context dependent data generation framework for model training. Moreover, we present a metric to evaluate the time awareness of the QA model using TCSE. The TCSE task consists of a question and four sentence candidates classified as correct or incorrect based on time and context. The model is trained to extract the answer span from the sentence that is both correct in time and context. The model trained with TCQA outperforms baseline models up to 8.5 of the F1-score in the TimeQA dataset. Our dataset and code are available at https://github.com/sonjbin/TCQA",
}
| Time is one of the crucial factors in real-world question answering (QA) problems. However, language models have difficulty understanding the relationships between time specifiers, such as {`}after{'} and {`}before{'}, and numbers, since existing QA datasets do not include sufficient time expressions. To address this issue, we propose a Time-Context aware Question Answering (TCQA) framework. We suggest a Time-Context dependent Span Extraction (TCSE) task, and build a time-context dependent data generation framework for model training. Moreover, we present a metric to evaluate the time awareness of the QA model using TCSE. The TCSE task consists of a question and four sentence candidates classified as correct or incorrect based on time and context. The model is trained to extract the answer span from the sentence that is both correct in time and context. The model trained with TCQA outperforms baseline models up to 8.5 of the F1-score in the TimeQA dataset. Our dataset and code are available at https://github.com/sonjbin/TCQA | [
"Son, Jungbin",
"Oh, Alice"
] | Time-Aware Representation Learning for Time-Sensitive Question Answering | findings-emnlp.6 | 2310.12585 | [
"https://github.com/sonjbin/tcqa"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.7.bib | https://aclanthology.org/2023.findings-emnlp.7/ | @inproceedings{larionov-etal-2023-effeval,
title = "{E}ff{E}val: A Comprehensive Evaluation of Efficiency for {MT} Evaluation Metrics",
author = {Larionov, Daniil and
Gr{\"u}nwald, Jens and
Leiter, Christoph and
Eger, Steffen},
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.7",
doi = "10.18653/v1/2023.findings-emnlp.7",
pages = "78--96",
abstract = "Efficiency is a key property to foster inclusiveness and reduce environmental costs, especially in an era of LLMs. In this work, we provide a comprehensive evaluation of efficiency for MT evaluation metrics. Our approach involves replacing computation-intensive transformers with lighter alternatives and employing linear and quadratic approximations for alignment algorithms on top of LLM representations. We evaluate six (reference-free and reference-based) metrics across three MT datasets and examine 16 lightweight transformers. In addition, we look into the training efficiency of metrics like COMET by utilizing adapters. Our results indicate that (a) TinyBERT provides the optimal balance between quality and efficiency, (b) CPU speed-ups are more substantial than those on GPU; (c) WMD approximations yield no efficiency gains while reducing quality and (d) adapters enhance training efficiency (regarding backward pass speed and memory requirements) as well as, in some cases, metric quality. These findings can help to strike a balance between evaluation speed and quality, which is essential for effective NLG systems. Furthermore, our research contributes to the ongoing efforts to optimize NLG evaluation metrics with minimal impact on performance. To our knowledge, ours is the most comprehensive analysis of different aspects of efficiency for MT metrics conducted so far.",
}
| Efficiency is a key property to foster inclusiveness and reduce environmental costs, especially in an era of LLMs. In this work, we provide a comprehensive evaluation of efficiency for MT evaluation metrics. Our approach involves replacing computation-intensive transformers with lighter alternatives and employing linear and quadratic approximations for alignment algorithms on top of LLM representations. We evaluate six (reference-free and reference-based) metrics across three MT datasets and examine 16 lightweight transformers. In addition, we look into the training efficiency of metrics like COMET by utilizing adapters. Our results indicate that (a) TinyBERT provides the optimal balance between quality and efficiency, (b) CPU speed-ups are more substantial than those on GPU; (c) WMD approximations yield no efficiency gains while reducing quality and (d) adapters enhance training efficiency (regarding backward pass speed and memory requirements) as well as, in some cases, metric quality. These findings can help to strike a balance between evaluation speed and quality, which is essential for effective NLG systems. Furthermore, our research contributes to the ongoing efforts to optimize NLG evaluation metrics with minimal impact on performance. To our knowledge, ours is the most comprehensive analysis of different aspects of efficiency for MT metrics conducted so far. | [
"Larionov, Daniil",
"Gr{\\\"u}nwald, Jens",
"Leiter, Christoph",
"Eger, Steffen"
] | EffEval: A Comprehensive Evaluation of Efficiency for MT Evaluation Metrics | findings-emnlp.7 | 2209.09593 | [
"https://github.com/nl2g/effeval"
] | https://huggingface.co/papers/2209.09593 | 1 | 1 | 0 | 4 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.8.bib | https://aclanthology.org/2023.findings-emnlp.8/ | @inproceedings{basu-roy-chowdhury-etal-2023-unsupervised-opinion,
title = "Unsupervised Opinion Summarization Using Approximate Geodesics",
author = "Basu Roy Chowdhury, Somnath and
Monath, Nicholas and
Dubey, Kumar and
Ahmed, Amr and
Chaturvedi, Snigdha",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.8",
doi = "10.18653/v1/2023.findings-emnlp.8",
pages = "97--112",
abstract = "Opinion summarization is the task of creating summaries capturing popular opinions from user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system to perform unsupervised extractive opinion summarization. GeoSumm consists of an encoder-decoder based representation learning model that generates topical representations of texts. These representations capture the underlying semantics of the text as a distribution over learnable latent units. GeoSumm generates these topical representations by performing dictionary learning over pre-trained text representations at multiple layers of the decoder. We then use these topical representations to quantify the importance of review sentences using a novel approximate geodesic distance-based scoring mechanism. We use the importance scores to identify popular opinions in order to compose general and aspect-specific summaries. Our proposed model, GeoSumm, achieves strong performance on three opinion summarization datasets. We perform additional experiments to analyze the functioning of our model and showcase the generalization ability of GeoSumm across different domains.",
}
| Opinion summarization is the task of creating summaries capturing popular opinions from user reviews. In this paper, we introduce Geodesic Summarizer (GeoSumm), a novel system to perform unsupervised extractive opinion summarization. GeoSumm consists of an encoder-decoder based representation learning model that generates topical representations of texts. These representations capture the underlying semantics of the text as a distribution over learnable latent units. GeoSumm generates these topical representations by performing dictionary learning over pre-trained text representations at multiple layers of the decoder. We then use these topical representations to quantify the importance of review sentences using a novel approximate geodesic distance-based scoring mechanism. We use the importance scores to identify popular opinions in order to compose general and aspect-specific summaries. Our proposed model, GeoSumm, achieves strong performance on three opinion summarization datasets. We perform additional experiments to analyze the functioning of our model and showcase the generalization ability of GeoSumm across different domains. | [
"Basu Roy Chowdhury, Somnath",
"Monath, Nicholas",
"Dubey, Kumar",
"Ahmed, Amr",
"Chaturvedi, Snigdha"
] | Unsupervised Opinion Summarization Using Approximate Geodesics | findings-emnlp.8 | 2209.07496 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.9.bib | https://aclanthology.org/2023.findings-emnlp.9/ | @inproceedings{valentini-etal-2023-investigating,
title = "Investigating the Frequency Distortion of Word Embeddings and Its Impact on Bias Metrics",
author = "Valentini, Francisco and
Sosa, Juan and
Slezak, Diego and
Altszyler, Edgar",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.9",
doi = "10.18653/v1/2023.findings-emnlp.9",
pages = "113--126",
abstract = "Recent research has shown that static word embeddings can encode words{'} frequencies. However, little has been studied about this behavior. In the present work, we study how frequency and semantic similarity relate to one another in static word embeddings, and we assess the impact of this relationship on embedding-based bias metrics. We find that Skip-gram, GloVe and FastText embeddings tend to produce higher similarity between high-frequency words than between other frequency combinations. We show that the association between frequency and similarity also appears when words are randomly shuffled, and holds for different hyperparameter settings. This proves that the patterns we find are neither due to real semantic associations nor to specific parameters choices, and are an artifact produced by the word embeddings. To illustrate how frequencies can affect the measurement of biases related to gender, ethnicity, and affluence, we carry out a controlled experiment that shows that biases can even change sign or reverse their order when word frequencies change.",
}
| Recent research has shown that static word embeddings can encode words{'} frequencies. However, little has been studied about this behavior. In the present work, we study how frequency and semantic similarity relate to one another in static word embeddings, and we assess the impact of this relationship on embedding-based bias metrics. We find that Skip-gram, GloVe and FastText embeddings tend to produce higher similarity between high-frequency words than between other frequency combinations. We show that the association between frequency and similarity also appears when words are randomly shuffled, and holds for different hyperparameter settings. This proves that the patterns we find are neither due to real semantic associations nor to specific parameters choices, and are an artifact produced by the word embeddings. To illustrate how frequencies can affect the measurement of biases related to gender, ethnicity, and affluence, we carry out a controlled experiment that shows that biases can even change sign or reverse their order when word frequencies change. | [
"Valentini, Francisco",
"Sosa, Juan",
"Slezak, Diego",
"Altszyler, Edgar"
] | Investigating the Frequency Distortion of Word Embeddings and Its Impact on Bias Metrics | findings-emnlp.9 | 2211.08203 | [
"https://github.com/ftvalentini/embeddingsfrequency"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.10.bib | https://aclanthology.org/2023.findings-emnlp.10/ | @inproceedings{balashankar-etal-2023-improving,
title = "Improving Classifier Robustness through Active Generative Counterfactual Data Augmentation",
author = "Balashankar, Ananth and
Wang, Xuezhi and
Qin, Yao and
Packer, Ben and
Thain, Nithum and
Chi, Ed and
Chen, Jilin and
Beutel, Alex",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.10",
doi = "10.18653/v1/2023.findings-emnlp.10",
pages = "127--139",
abstract = "Counterfactual Data Augmentation (CDA) is a commonly used technique for improving robustness in natural language classifiers. However, one fundamental challenge is how to discover meaningful counterfactuals and efficiently label them, with minimal human labeling cost. Most existing methods either completely rely on human-annotated labels, an expensive process which limits the scale of counterfactual data, or implicitly assume label invariance, which may mislead the model with incorrect labels. In this paper, we present a novel framework that utilizes counterfactual generative models to generate a large number of diverse counterfactuals by actively sampling from regions of uncertainty, and then automatically label them with a learned auxiliary classifier. Our key insight is that we can more correctly label the generated counterfactuals by training a pairwise classifier that interpolates the relationship between the original example and the counterfactual. We demonstrate that with a small amount of human-annotated counterfactual data (10{\%}), we can generate a counterfactual augmentation dataset with learned labels, that provides an 18-20{\%} improvement in robustness and a 14-21{\%} reduction in errors on 6 out-of-domain datasets, comparable to that of a fully human-annotated counterfactual dataset for both sentiment classification and question paraphrase tasks.",
}
| Counterfactual Data Augmentation (CDA) is a commonly used technique for improving robustness in natural language classifiers. However, one fundamental challenge is how to discover meaningful counterfactuals and efficiently label them, with minimal human labeling cost. Most existing methods either completely rely on human-annotated labels, an expensive process which limits the scale of counterfactual data, or implicitly assume label invariance, which may mislead the model with incorrect labels. In this paper, we present a novel framework that utilizes counterfactual generative models to generate a large number of diverse counterfactuals by actively sampling from regions of uncertainty, and then automatically label them with a learned auxiliary classifier. Our key insight is that we can more correctly label the generated counterfactuals by training a pairwise classifier that interpolates the relationship between the original example and the counterfactual. We demonstrate that with a small amount of human-annotated counterfactual data (10{\%}), we can generate a counterfactual augmentation dataset with learned labels, that provides an 18-20{\%} improvement in robustness and a 14-21{\%} reduction in errors on 6 out-of-domain datasets, comparable to that of a fully human-annotated counterfactual dataset for both sentiment classification and question paraphrase tasks. | [
"Balashankar, Ananth",
"Wang, Xuezhi",
"Qin, Yao",
"Packer, Ben",
"Thain, Nithum",
"Chi, Ed",
"Chen, Jilin",
"Beutel, Alex"
] | Improving Classifier Robustness through Active Generative Counterfactual Data Augmentation | findings-emnlp.10 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.11.bib | https://aclanthology.org/2023.findings-emnlp.11/ | @inproceedings{hamed-etal-2023-data,
title = "Data Augmentation Techniques for Machine Translation of Code-Switched Texts: A Comparative Study",
author = "Hamed, Injy and
Habash, Nizar and
Vu, Thang",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.11",
doi = "10.18653/v1/2023.findings-emnlp.11",
pages = "140--154",
abstract = "Code-switching (CSW) text generation has been receiving increasing attention as a solution to address data scarcity. In light of this growing interest, we need more comprehensive studies comparing different augmentation approaches. In this work, we compare three popular approaches: lexical replacements, linguistic theories, and back-translation (BT), in the context of Egyptian Arabic-English CSW. We assess the effectiveness of the approaches on machine translation and the quality of augmentations through human evaluation. We show that BT and CSW predictive-based lexical replacement, being trained on CSW parallel data, perform best on both tasks. Linguistic theories and random lexical replacement prove to be effective in the lack of CSW parallel data, where both approaches achieve similar results.",
}
| Code-switching (CSW) text generation has been receiving increasing attention as a solution to address data scarcity. In light of this growing interest, we need more comprehensive studies comparing different augmentation approaches. In this work, we compare three popular approaches: lexical replacements, linguistic theories, and back-translation (BT), in the context of Egyptian Arabic-English CSW. We assess the effectiveness of the approaches on machine translation and the quality of augmentations through human evaluation. We show that BT and CSW predictive-based lexical replacement, being trained on CSW parallel data, perform best on both tasks. Linguistic theories and random lexical replacement prove to be effective in the lack of CSW parallel data, where both approaches achieve similar results. | [
"Hamed, Injy",
"Habash, Nizar",
"Vu, Thang"
] | Data Augmentation Techniques for Machine Translation of Code-Switched Texts: A Comparative Study | findings-emnlp.11 | 2310.15262 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.12.bib | https://aclanthology.org/2023.findings-emnlp.12/ | @inproceedings{chen-etal-2023-relation,
title = "On the Relation between Sensitivity and Accuracy in In-Context Learning",
author = "Chen, Yanda and
Zhao, Chen and
Yu, Zhou and
McKeown, Kathleen and
He, He",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.12",
doi = "10.18653/v1/2023.findings-emnlp.12",
pages = "155--167",
abstract = "In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios. We study the sensitivity of ICL with respect to multiple perturbation types. First, we find that label bias obscures the true sensitivity, and therefore prior work may have significantly underestimated ICL sensitivity. Second, we observe a strong negative correlation between ICL sensitivity and accuracy: predictions sensitive to perturbations are less likely to be correct. Motivated by these findings, we propose $SenSel$, a few-shot selective prediction method that abstains from sensitive predictions. Experiments on ten classification datasets show that $SenSel$ consistently outperforms two commonly used confidence-based and entropy-based baselines on abstention decisions.",
}
| In-context learning (ICL) suffers from oversensitivity to the prompt, making it unreliable in real-world scenarios. We study the sensitivity of ICL with respect to multiple perturbation types. First, we find that label bias obscures the true sensitivity, and therefore prior work may have significantly underestimated ICL sensitivity. Second, we observe a strong negative correlation between ICL sensitivity and accuracy: predictions sensitive to perturbations are less likely to be correct. Motivated by these findings, we propose $SenSel$, a few-shot selective prediction method that abstains from sensitive predictions. Experiments on ten classification datasets show that $SenSel$ consistently outperforms two commonly used confidence-based and entropy-based baselines on abstention decisions. | [
"Chen, Y",
"a",
"Zhao, Chen",
"Yu, Zhou",
"McKeown, Kathleen",
"He, He"
] | On the Relation between Sensitivity and Accuracy in In-Context Learning | findings-emnlp.12 | 2209.07661 | [
"https://github.com/yandachen/iclsensitivity"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.13.bib | https://aclanthology.org/2023.findings-emnlp.13/ | @inproceedings{lin-etal-2023-self,
title = "Self-distilled Transitive Instance Weighting for Denoised Distantly Supervised Relation Extraction",
author = "Lin, Xiangyu and
Jia, Weijia and
Gong, Zhiguo",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.13",
doi = "10.18653/v1/2023.findings-emnlp.13",
pages = "168--180",
abstract = "The widespread existence of wrongly labeled instances is a challenge to distantly supervised relation extraction. Most of the previous works are trained in a bag-level setting to alleviate such noise. However, sentence-level training better utilizes the information than bag-level training, as long as combined with effective noise alleviation. In this work, we propose a novel Transitive Instance Weighting mechanism integrated with the self-distilled BERT backbone, utilizing information in the intermediate outputs to generate dynamic instance weights for denoised sentence-level training. By down-weighting wrongly labeled instances and discounting the weights of easy-to-fit ones, our method can effectively tackle wrongly labeled instances and prevent overfitting. Experiments on both held-out and manual datasets indicate that our method achieves state-of-the-art performance and consistent improvements over the baselines.",
}
| The widespread existence of wrongly labeled instances is a challenge to distantly supervised relation extraction. Most of the previous works are trained in a bag-level setting to alleviate such noise. However, sentence-level training better utilizes the information than bag-level training, as long as combined with effective noise alleviation. In this work, we propose a novel Transitive Instance Weighting mechanism integrated with the self-distilled BERT backbone, utilizing information in the intermediate outputs to generate dynamic instance weights for denoised sentence-level training. By down-weighting wrongly labeled instances and discounting the weights of easy-to-fit ones, our method can effectively tackle wrongly labeled instances and prevent overfitting. Experiments on both held-out and manual datasets indicate that our method achieves state-of-the-art performance and consistent improvements over the baselines. | [
"Lin, Xiangyu",
"Jia, Weijia",
"Gong, Zhiguo"
] | Self-distilled Transitive Instance Weighting for Denoised Distantly Supervised Relation Extraction | findings-emnlp.13 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.14.bib | https://aclanthology.org/2023.findings-emnlp.14/ | @inproceedings{tanner-hoffman-2023-mwe,
title = "{MWE} as {WSD}: Solving Multiword Expression Identification with Word Sense Disambiguation",
author = "Tanner, Joshua and
Hoffman, Jacob",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.14",
doi = "10.18653/v1/2023.findings-emnlp.14",
pages = "181--193",
abstract = "Recent approaches to word sense disambiguation (WSD) utilize encodings of the sense gloss (definition), in addition to the input context, to improve performance. In this work we demonstrate that this approach can be adapted for use in multiword expression (MWE) identification by training models which use gloss and context information to filter MWE candidates produced by a rule-based extraction pipeline. Our approach substantially improves precision, outperforming the state-of-the-art in MWE identification on the DiMSUM dataset by up to 1.9 F1 points and achieving competitive results on the PARSEME 1.1 English dataset. Our models also retain most of their WSD performance, showing that a single model can be used for both tasks. Finally, building on similar approaches using Bi-encoders for WSD, we introduce a novel Poly-encoder architecture which improves MWE identification performance.",
}
| Recent approaches to word sense disambiguation (WSD) utilize encodings of the sense gloss (definition), in addition to the input context, to improve performance. In this work we demonstrate that this approach can be adapted for use in multiword expression (MWE) identification by training models which use gloss and context information to filter MWE candidates produced by a rule-based extraction pipeline. Our approach substantially improves precision, outperforming the state-of-the-art in MWE identification on the DiMSUM dataset by up to 1.9 F1 points and achieving competitive results on the PARSEME 1.1 English dataset. Our models also retain most of their WSD performance, showing that a single model can be used for both tasks. Finally, building on similar approaches using Bi-encoders for WSD, we introduce a novel Poly-encoder architecture which improves MWE identification performance. | [
"Tanner, Joshua",
"Hoffman, Jacob"
] | MWE as WSD: Solving Multiword Expression Identification with Word Sense Disambiguation | findings-emnlp.14 | 2303.06623 | [
"https://github.com/mindful/mweaswsd"
] | https://huggingface.co/papers/2303.06623 | 0 | 0 | 0 | 2 | [] | [] | [] | 1 | Poster |
https://aclanthology.org/2023.findings-emnlp.15.bib | https://aclanthology.org/2023.findings-emnlp.15/ | @inproceedings{wang-etal-2023-dual,
title = "Dual Contrastive Learning Framework for Incremental Text Classification",
author = "Wang, Yigong and
Wang, Zhuoyi and
Lin, Yu and
Guo, Jinghui and
Halim, Sadaf and
Khan, Latifur",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.15",
doi = "10.18653/v1/2023.findings-emnlp.15",
pages = "194--206",
abstract = "Incremental learning plays a pivotal role in the context of online knowledge discovery, as it encourages large models (LM) to learn and refresh knowledge continuously. Many approaches have been proposed to simultaneously preserve knowledge from previous tasks while learning new concepts in online NLP applications. In this paper, we primarily focus on learning a more generalized embedding space that could be better transferred to various downstream sequence tasks. The key idea is to learn from both task-agnostic and task-specific embedding aspects so that the inherent challenge of catastrophic forgetting that arises in incremental learning scenarios can be addressed with a more generalized solution. We propose a dual contrastive learning (DCL) based framework to foster the transferability of representations across different tasks, it consists of two key components: firstly, we utilize global contrastive learning that intertwines a task-agnostic strategy for promoting a generalized embedding space; secondly, considering the domain shift from unseen distributions can compromise the quality of learned embeddings. We further incorporate a task-specific attention mechanism to enhance the adaptability of task-specific weight for various emerging tasks and ultimately reduce errors in generic representations. Experiments over various text datasets demonstrate that our work achieves superior performance and outperforms the current state-of-the-art methods.",
}
| Incremental learning plays a pivotal role in the context of online knowledge discovery, as it encourages large models (LM) to learn and refresh knowledge continuously. Many approaches have been proposed to simultaneously preserve knowledge from previous tasks while learning new concepts in online NLP applications. In this paper, we primarily focus on learning a more generalized embedding space that could be better transferred to various downstream sequence tasks. The key idea is to learn from both task-agnostic and task-specific embedding aspects so that the inherent challenge of catastrophic forgetting that arises in incremental learning scenarios can be addressed with a more generalized solution. We propose a dual contrastive learning (DCL) based framework to foster the transferability of representations across different tasks, it consists of two key components: firstly, we utilize global contrastive learning that intertwines a task-agnostic strategy for promoting a generalized embedding space; secondly, considering the domain shift from unseen distributions can compromise the quality of learned embeddings. We further incorporate a task-specific attention mechanism to enhance the adaptability of task-specific weight for various emerging tasks and ultimately reduce errors in generic representations. Experiments over various text datasets demonstrate that our work achieves superior performance and outperforms the current state-of-the-art methods. | [
"Wang, Yigong",
"Wang, Zhuoyi",
"Lin, Yu",
"Guo, Jinghui",
"Halim, Sadaf",
"Khan, Latifur"
] | Dual Contrastive Learning Framework for Incremental Text Classification | findings-emnlp.15 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
||
https://aclanthology.org/2023.findings-emnlp.16.bib | https://aclanthology.org/2023.findings-emnlp.16/ | @inproceedings{gain-etal-2023-reference,
title = "Reference Free Domain Adaptation for Translation of Noisy Questions with Question Specific Rewards",
author = "Gain, Baban and
Appicharla, Ramakrishna and
Chennabasavaraj, Soumya and
Garera, Nikesh and
Ekbal, Asif and
Chelliah, Muthusamy",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.16",
doi = "10.18653/v1/2023.findings-emnlp.16",
pages = "207--221",
abstract = "Community Question-Answering (CQA) portals serve as a valuable tool for helping users within an organization. However, making them accessible to non-English-speaking users continues to be a challenge. Translating questions can broaden the community{'}s reach, benefiting individuals with similar inquiries in various languages. Translating questions using Neural Machine Translation (NMT) poses more challenges, especially in noisy environments, where the grammatical correctness of the questions is not monitored. These questions may be phrased as statements by non-native speakers, with incorrect subject-verb order and sometimes even missing question marks. Creating a synthetic parallel corpus from such data is also difficult due to its noisy nature. To address this issue, we propose a training methodology that fine-tunes the NMT system only using source-side data. Our approach balances adequacy and fluency by utilizing a loss function that combines BERTScore and Masked Language Model (MLM) Score. Our method surpasses the conventional Maximum Likelihood Estimation (MLE) based fine-tuning approach, which relies on synthetic target data, by achieving a 1.9 BLEU score improvement. Our model exhibits robustness while we add noise to our baseline, and still achieve 1.1 BLEU improvement and large improvements on TER and BLEURT metrics. Our proposed methodology is model-agnostic and is only necessary during the training phase. We make the codes and datasets publicly available at \url{https://www.iitp.ac.in/~ai-nlp-ml/resources.html#DomainAdapt} for facilitating further research.",
}
| Community Question-Answering (CQA) portals serve as a valuable tool for helping users within an organization. However, making them accessible to non-English-speaking users continues to be a challenge. Translating questions can broaden the community{'}s reach, benefiting individuals with similar inquiries in various languages. Translating questions using Neural Machine Translation (NMT) poses more challenges, especially in noisy environments, where the grammatical correctness of the questions is not monitored. These questions may be phrased as statements by non-native speakers, with incorrect subject-verb order and sometimes even missing question marks. Creating a synthetic parallel corpus from such data is also difficult due to its noisy nature. To address this issue, we propose a training methodology that fine-tunes the NMT system only using source-side data. Our approach balances adequacy and fluency by utilizing a loss function that combines BERTScore and Masked Language Model (MLM) Score. Our method surpasses the conventional Maximum Likelihood Estimation (MLE) based fine-tuning approach, which relies on synthetic target data, by achieving a 1.9 BLEU score improvement. Our model exhibits robustness while we add noise to our baseline, and still achieve 1.1 BLEU improvement and large improvements on TER and BLEURT metrics. Our proposed methodology is model-agnostic and is only necessary during the training phase. We make the codes and datasets publicly available at \url{https://www.iitp.ac.in/~ai-nlp-ml/resources.html#DomainAdapt} for facilitating further research. | [
"Gain, Baban",
"Appicharla, Ramakrishna",
"Chennabasavaraj, Soumya",
"Garera, Nikesh",
"Ekbal, Asif",
"Chelliah, Muthusamy"
] | Reference Free Domain Adaptation for Translation of Noisy Questions with Question Specific Rewards | findings-emnlp.16 | 2310.15259 | [
"https://github.com/babangain/unsup_questions_translation"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.17.bib | https://aclanthology.org/2023.findings-emnlp.17/ | @inproceedings{zaratiana-etal-2023-filtered,
title = "Filtered Semi-{M}arkov {CRF}",
author = "Zaratiana, Urchade and
Tomeh, Nadi and
El Khbir, Niama and
Holat, Pierre and
Charnois, Thierry",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.17",
doi = "10.18653/v1/2023.findings-emnlp.17",
pages = "222--235",
abstract = "Semi-Markov CRF has been proposed as an alternative to the traditional Linear Chain CRF for text segmentation tasks such as Named Entity Recognition (NER). Unlike CRF, which treats text segmentation as token-level prediction, Semi-CRF considers segments as the basic unit, making it more expressive. However, Semi-CRF suffers from two major drawbacks: (1) quadratic complexity over sequence length, as it operates on every span of the input sequence, and (2) inferior performance compared to CRF for sequence labeling tasks like NER. In this paper, we introduce Filtered Semi-Markov CRF, a variant of Semi-CRF that addresses these issues by incorporating a filtering step to eliminate irrelevant segments, reducing complexity and search space. Our approach is evaluated on several NER benchmarks, where it outperforms both CRF and Semi-CRF while being significantly faster. The implementation of our method is available on Github.",
}
| Semi-Markov CRF has been proposed as an alternative to the traditional Linear Chain CRF for text segmentation tasks such as Named Entity Recognition (NER). Unlike CRF, which treats text segmentation as token-level prediction, Semi-CRF considers segments as the basic unit, making it more expressive. However, Semi-CRF suffers from two major drawbacks: (1) quadratic complexity over sequence length, as it operates on every span of the input sequence, and (2) inferior performance compared to CRF for sequence labeling tasks like NER. In this paper, we introduce Filtered Semi-Markov CRF, a variant of Semi-CRF that addresses these issues by incorporating a filtering step to eliminate irrelevant segments, reducing complexity and search space. Our approach is evaluated on several NER benchmarks, where it outperforms both CRF and Semi-CRF while being significantly faster. The implementation of our method is available on Github. | [
"Zaratiana, Urchade",
"Tomeh, Nadi",
"El Khbir, Niama",
"Holat, Pierre",
"Charnois, Thierry"
] | Filtered Semi-Markov CRF | findings-emnlp.17 | 2311.18028 | [
"https://github.com/urchade/filtered-semi-markov-crf"
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |
|
https://aclanthology.org/2023.findings-emnlp.18.bib | https://aclanthology.org/2023.findings-emnlp.18/ | @inproceedings{azeemi-etal-2023-data,
title = "Data Pruning for Efficient Model Pruning in Neural Machine Translation",
author = "Azeemi, Abdul and
Qazi, Ihsan and
Raza, Agha",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Findings of the Association for Computational Linguistics: EMNLP 2023",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.findings-emnlp.18",
doi = "10.18653/v1/2023.findings-emnlp.18",
pages = "236--246",
abstract = "Model pruning methods reduce memory requirements and inference time of large-scale pre-trained language models after deployment. However, the actual pruning procedure is computationally intensive, involving repeated training and pruning until the required sparsity is achieved. This paper combines data pruning with movement pruning for Neural Machine Translation (NMT) to enable efficient fine-pruning. We design a dataset pruning strategy by leveraging cross-entropy scores of individual training instances. We conduct pruning experiments on the task of machine translation from Romanian-to-English and Turkish-to-English, and demonstrate that selecting hard-to-learn examples (top-k) based on training cross-entropy scores outperforms other dataset pruning methods. We empirically demonstrate that data pruning reduces the overall steps required for convergence and the training time of movement pruning. Finally, we perform a series of experiments to tease apart the role of training data during movement pruning and uncover new insights to understand the interplay between data and model pruning in the context of NMT.",
}
| Model pruning methods reduce memory requirements and inference time of large-scale pre-trained language models after deployment. However, the actual pruning procedure is computationally intensive, involving repeated training and pruning until the required sparsity is achieved. This paper combines data pruning with movement pruning for Neural Machine Translation (NMT) to enable efficient fine-pruning. We design a dataset pruning strategy by leveraging cross-entropy scores of individual training instances. We conduct pruning experiments on the task of machine translation from Romanian-to-English and Turkish-to-English, and demonstrate that selecting hard-to-learn examples (top-k) based on training cross-entropy scores outperforms other dataset pruning methods. We empirically demonstrate that data pruning reduces the overall steps required for convergence and the training time of movement pruning. Finally, we perform a series of experiments to tease apart the role of training data during movement pruning and uncover new insights to understand the interplay between data and model pruning in the context of NMT. | [
"Azeemi, Abdul",
"Qazi, Ihsan",
"Raza, Agha"
] | Data Pruning for Efficient Model Pruning in Neural Machine Translation | findings-emnlp.18 | [
""
] | -1 | -1 | -1 | -1 | [] | [] | [] | 0 | Poster |