Dataset schema (from the dataset viewer):
title: string (5-246 chars)
categories: string (5-94 chars)
abstract: string (54-5.03k chars)
authors: string (0-6.72k chars)
doi: string (12-54 chars)
id: string (6-10 chars)
year: float64 (2.02k-2.02k)
venue: string (13 classes)
Fallacious Argument Classification in Political Debates
null
Fallacies have played a prominent role in argumentation since antiquity due to their contribution to argumentation in critical thinking education. Their role is even more crucial nowadays, as contemporary argumentation technologies face challenging tasks such as detecting misleading and manipulative information in news articles and political discourse, and counter-narrative generation. Despite some work in this direction, classifying arguments as fallacious largely remains a challenging and unsolved task. Our contribution is twofold: first, we present a novel annotated resource of 31 political debates from the U.S. Presidential Campaigns, in which we annotated six main categories of fallacious arguments (i.e., ad hominem, appeal to authority, appeal to emotion, false cause, slogan, slippery slope), yielding 1628 annotated fallacious arguments; second, we tackle this novel task of fallacious argument classification and define a transformer-based neural architecture that outperforms state-of-the-art results and standard baselines. Our results show the important role played by argument components and relations in this task.
Pierpaolo Goffredo, Shohreh Haddadan, Vorakit Vorakitphan, Elena Cabrio, Serena Villata
null
null
2022
ijcai
Logically Consistent Adversarial Attacks for Soft Theorem Provers
null
Recent efforts within the AI community have yielded impressive results towards “soft theorem proving” over natural language sentences using language models. We propose a novel, generative adversarial framework for probing and improving these models’ reasoning capabilities. Adversarial attacks in this domain suffer from the logical inconsistency problem, whereby perturbations to the input may alter the label. Our Logically consistent AdVersarial Attacker, LAVA, addresses this by combining a structured generative process with a symbolic solver, guaranteeing logical consistency. Our framework successfully generates adversarial attacks and identifies global weaknesses common across multiple target models. Our analyses reveal naive heuristics and vulnerabilities in these models’ reasoning capabilities, exposing an incomplete grasp of logical deduction under logic programs. Finally, in addition to effective probing of these models, we show that training on the generated samples improves the target model’s performance.
Alexander Gaskell, Yishu Miao, Francesca Toni, Lucia Specia
null
null
2022
ijcai
Interactive Information Extraction by Semantic Information Graph
null
Information extraction (IE) mainly focuses on three highly correlated subtasks, i.e., entity extraction, relation extraction, and event extraction. Recently, studies have used Abstract Meaning Representation (AMR) to exploit the intrinsic correlations among these three subtasks. AMR-based models are capable of modeling the relationships among arguments. However, they struggle to deal with relations. In addition, the noise in AMR (i.e., tags unrelated to IE tasks, nodes with irrelevant concepts, and edge types with complicated hierarchical structures) disturbs the decoding process of IE. As a result, decoding constrained by AMR cannot work effectively. To overcome these shortcomings, we propose an Interactive Information Extraction (InterIE) model based on a novel Semantic Information Graph (SIG). SIG guides our InterIE model to tackle the three subtasks jointly. Furthermore, the well-designed, noise-free SIG is capable of enriching entity and event trigger representations and capturing the edge connections between information types. Experimental results show that our InterIE achieves state-of-the-art performance on all IE subtasks on the benchmark datasets (i.e., ACE05-E+ and ACE05-E). More importantly, the proposed model is not sensitive to the decoding order, which goes beyond the limitations of AMR-based methods.
Siqi Fan, Yequan Wang, Jing Li, Zheng Zhang, Shuo Shang, Peng Han
null
null
2022
ijcai
FastDiff: A Fast Conditional Diffusion Model for High-Quality Speech Synthesis
null
Denoising diffusion probabilistic models (DDPMs) have recently achieved leading performance in many generative tasks. However, the inherent cost of their iterative sampling process has hindered their application to speech synthesis. This paper proposes FastDiff, a fast conditional diffusion model for high-quality speech synthesis. FastDiff employs a stack of time-aware location-variable convolutions with diverse receptive field patterns to efficiently model long-term time dependencies under adaptive conditions. A noise schedule predictor is also adopted to reduce the number of sampling steps without sacrificing generation quality. Based on FastDiff, we design an end-to-end text-to-speech synthesizer, FastDiff-TTS, which generates high-fidelity speech waveforms without any intermediate features (e.g., Mel-spectrograms). Our evaluation of FastDiff demonstrates state-of-the-art results, with higher-quality (MOS 4.28) speech samples. FastDiff also enables a sampling speed 58x faster than real-time on a V100 GPU, making diffusion models practically applicable to speech synthesis deployment for the first time. We further show that FastDiff generalizes well to mel-spectrogram inversion of unseen speakers, and that FastDiff-TTS outperforms other competing methods in end-to-end text-to-speech synthesis. Audio samples are available at https://FastDiff.github.io/.
Rongjie Huang, Max W. Y. Lam, Jun Wang, Dan Su, Dong Yu, Yi Ren, Zhou Zhao
null
null
2022
ijcai
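The step-reduction idea behind FastDiff can be sketched in plain Python: a short noise schedule replaces the full one, and the reverse loop runs over far fewer steps. This is a minimal sketch, not the paper's method; `shorten_schedule` merely subsamples where FastDiff uses a learned noise-schedule predictor, and `denoise_fn` abstracts away the trained model.

```python
import numpy as np

def shorten_schedule(betas: np.ndarray, n_steps: int) -> np.ndarray:
    """Subsample the full noise schedule down to n_steps levels.

    A stand-in for a learned noise-schedule predictor, which selects a
    short schedule so far fewer reverse steps are needed."""
    idx = np.linspace(0, len(betas) - 1, n_steps).round().astype(int)
    return betas[idx]

def sample(denoise_fn, x, betas: np.ndarray):
    """Run the reverse-diffusion loop over the (shortened) schedule."""
    for beta in betas[::-1]:
        x = denoise_fn(x, beta)  # one conditional denoising step
    return x
```

With a 1000-step training schedule, `shorten_schedule(betas, 4)` yields a 4-step sampler, which is where the reported real-time speedups come from.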
Conversational Semantic Role Labeling with Predicate-Oriented Latent Graph
null
Conversational semantic role labeling (CSRL) is a newly proposed task that uncovers the shallow semantic structures in a dialogue text. Unfortunately, several important characteristics of the CSRL task have been overlooked by existing works, such as structural information integration and near-neighbor influence. In this work, we investigate the integration of a latent graph for CSRL. We propose to automatically induce a predicate-oriented latent graph (POLar) with a predicate-centered Gaussian mechanism, by which words nearer and more informative to the predicate are allocated more attention. The POLar structure is then dynamically pruned and refined so as to best fit the needs of the task. We additionally introduce an effective dialogue-level pre-trained language model, CoDiaBERT, to better support multiple utterance sentences and handle the speaker coreference issue in CSRL. Our system outperforms the best-performing baselines on three benchmark CSRL datasets by large margins, notably achieving over 4% F1 score improvements on cross-utterance argument detection. Further analyses are presented to better understand the effectiveness of our proposed methods.
Hao Fei, Shengqiong Wu, Meishan Zhang, Yafeng Ren, Donghong Ji
null
null
2022
ijcai
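The predicate-centered Gaussian mechanism, which allocates more attention to words near the predicate, can be sketched as follows. The function names, the σ value, and the renormalization step are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def predicate_centered_weights(seq_len: int, pred_idx: int, sigma: float = 2.0) -> np.ndarray:
    """Gaussian weights peaked at the predicate position, normalized to sum to 1."""
    positions = np.arange(seq_len)
    w = np.exp(-((positions - pred_idx) ** 2) / (2.0 * sigma ** 2))
    return w / w.sum()

def reweight_attention(attn: np.ndarray, pred_idx: int, sigma: float = 2.0) -> np.ndarray:
    """Bias an attention distribution toward the predicate neighborhood, then renormalize."""
    g = predicate_centered_weights(attn.shape[-1], pred_idx, sigma)
    biased = attn * g
    return biased / biased.sum(axis=-1, keepdims=True)
```

Applied to a uniform attention row, the reweighting shifts mass toward the predicate while keeping a valid probability distribution.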
Interpretable AMR-Based Question Decomposition for Multi-hop Question Answering
null
Effective multi-hop question answering (QA) requires reasoning over multiple scattered paragraphs and providing explanations for answers. Most existing approaches cannot provide an interpretable reasoning process to illustrate how these models arrive at an answer. In this paper, we propose a Question Decomposition method based on Abstract Meaning Representation (QDAMR) for multi-hop QA, which achieves interpretable reasoning by decomposing a multi-hop question into simpler sub-questions and answering them in order. Since annotating the decomposition is expensive, we first delegate the complexity of understanding the multi-hop question to an AMR parser. We then decompose the multi-hop question via segmentation of the corresponding AMR graph based on the required reasoning type. Finally, we generate sub-questions using an AMR-to-Text generation model and answer them with an off-the-shelf QA model. Experimental results on HotpotQA demonstrate that our approach is competitive for interpretable reasoning and that the sub-questions generated by QDAMR are well-formed, outperforming existing question-decomposition-based multi-hop QA approaches.
Zhenyun Deng, Yonghua Zhu, Yang Chen, Michael Witbrock, Patricia Riddle
null
null
2022
ijcai
MuiDial: Improving Dialogue Disentanglement with Intent-Based Mutual Learning
null
The main goal of dialogue disentanglement is to separate the mixed utterances in a chat slice into independent dialogues. Existing models often utilize either an utterance-to-utterance (U2U) prediction to determine whether two utterances that have a “reply-to” relationship belong to one dialogue, or an utterance-to-thread (U2T) prediction to determine which dialogue thread a given utterance should belong to. Inspired by mutual learning, we propose MuiDial, a novel dialogue disentanglement model, which exploits the intent of each utterance and feeds the intent to a mutual-learning U2U-U2T disentanglement model. Experimental results and in-depth analysis on several benchmark datasets demonstrate the effectiveness and generalizability of our approach.
Ziyou Jiang, Lin Shi, Celia Chen, Fangwen Mu, Yumin Zhang, Qing Wang
null
null
2022
ijcai
Improving Few-Shot Text-to-SQL with Meta Self-Training via Column Specificity
null
The few-shot problem is an urgent challenge for single-table text-to-SQL. Existing methods ignore the potential value of unlabeled data and merely rely on a coarse-grained Meta-Learning (ML) algorithm that neglects the differences in column contributions to the optimization objective. This paper proposes a Meta Self-Training text-to-SQL (MST-SQL) method to solve the problem. Specifically, MST-SQL is based on column-wise HydraNet and adopts self-training as an effective mechanism to learn from readily available unlabeled samples. During each training epoch, it first predicts pseudo-labels for unlabeled samples and then leverages them to update the parameters. A fine-grained ML algorithm is used in the update, which weighs the contribution of columns by their specificity in order to further improve generalizability. Extensive experimental results on both open-domain and domain-specific benchmarks reveal that our MST-SQL has significant advantages in few-shot scenarios and is also competitive in standard supervised settings.
Xinnan Guo, Yongrui Chen, Guilin Qi, Tianxing Wu, Hao Xu
null
null
2022
ijcai
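The self-training loop described in the abstract (predict pseudo-labels for unlabeled samples, then update on them) can be sketched generically. This is a hedged sketch: `model_predict`, `model_update`, and the confidence threshold are illustrative assumptions; MST-SQL's actual update is the fine-grained, column-specificity-weighted ML step.

```python
def self_training_epoch(model_predict, model_update, labeled, unlabeled, threshold=0.9):
    """One self-training epoch: pseudo-label the unlabeled pool, keep only
    confident predictions, and update the model on labeled + pseudo-labeled data.

    model_predict(x) -> (label, confidence); model_update takes a list of pairs."""
    pseudo = []
    for x in unlabeled:
        label, confidence = model_predict(x)
        if confidence >= threshold:
            pseudo.append((x, label))
    model_update(labeled + pseudo)  # e.g., one (meta-)learning update step
    return len(pseudo)
```

Running this once per epoch lets readily available unlabeled samples contribute gradient signal without any extra annotation.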
Taylor, Can You Hear Me Now? A Taylor-Unfolding Framework for Monaural Speech Enhancement
null
While deep learning techniques have driven the rapid development of the speech enhancement (SE) community, most schemes only pursue performance in a black-box manner and lack adequate model interpretability. Inspired by Taylor's approximation theory, we propose an interpretable decoupling-style SE framework, which disentangles complex spectrum recovery into two separate optimization problems, i.e., magnitude estimation and complex residual estimation. Specifically, serving as the 0th-order term of a Taylor series, a filter network is delicately devised to suppress the noise component in the magnitude domain only and obtain a coarse spectrum. To refine the phase distribution, we estimate the sparse complex residual, which is defined as the difference between the target and coarse spectra and measures the phase gap. In this study, we formulate the residual component as a combination of various high-order Taylor terms and propose a lightweight trainable module to replace the complicated derivative operator between adjacent terms. Finally, following Taylor's formula, we reconstruct the target spectrum by superimposing the 0th-order and high-order terms. Experimental results on two benchmark datasets show that our framework achieves state-of-the-art performance over previous competing baselines in various evaluation metrics. The source code is available at https://github.com/Andong-Li-speech/TaylorSENet.
Andong Li, Shan You, Guochen Yu, Chengshi Zheng, Xiaodong Li
null
null
2022
ijcai
“My nose is running.” “Are you also coughing?”: Building A Medical Diagnosis Agent with Interpretable Inquiry Logics
null
With the rise of telemedicine, the task of developing Dialogue Systems for Medical Diagnosis (DSMD) has received much attention in recent years. Different from early research that relied on extra human resources and expertise to build the system, recent research has focused on building DSMD in a data-driven manner. However, previous data-driven DSMD methods largely overlooked system interpretability, which is critical for a medical application, and they also suffered from the data sparsity issue. In this paper, we explore how to bring interpretability to data-driven DSMD. Specifically, we propose a more interpretable decision process to implement the dialogue manager of DSMD by reasonably mimicking real doctors' inquiry logic, and we devise a model with highly transparent components to conduct the inference. Moreover, we collect a new DSMD dataset, which has a much larger scale and more diverse patterns and is of higher quality than the existing ones. Experiments show that our method obtains 7.7%, 10.0%, and 3.0% absolute improvements in diagnosis accuracy on three datasets, respectively, demonstrating the effectiveness of its rational decision process and model design. Our code and the GMD-12 dataset are available at https://github.com/lwgkzl/BR-Agent.
Wenge Liu, Yi Cheng, Hao Wang, Jianheng Tang, Yafei Liu, Ruihui Zhao, Wenjie Li, Yefeng Zheng, Xiaodan Liang
null
null
2022
ijcai
Domain-Adaptive Text Classification with Structured Knowledge from Unlabeled Data
null
Domain-adaptive text classification is a challenging problem for large-scale pretrained language models because they often require expensive additional labeled data to adapt to new domains. Existing works usually fail to leverage the implicit relationships among words across domains. In this paper, we propose a novel method, called Domain Adaptation with Structured Knowledge (DASK), to enhance domain adaptation by exploiting word-level semantic relationships. DASK first builds a knowledge graph to capture the relationships between pivot terms (domain-independent words) and non-pivot terms in the target domain. Then, during training, DASK injects pivot-related knowledge-graph information into source-domain texts. For the downstream task, these knowledge-injected texts are fed into a BERT variant capable of processing knowledge-injected textual data. Thanks to the knowledge injection, our model learns domain-invariant features for non-pivots according to their relationships with pivots. DASK ensures that pivots behave domain-invariantly by dynamically inferring the polarity scores of candidate pivots during training with pseudo-labels. We validate DASK on a wide range of cross-domain sentiment classification tasks and observe up to 2.9% absolute performance improvement over baselines across 20 different domain pairs. Code is available at https://github.com/hikaru-nara/DASK.
Tian Li, Xiang Chen, Zhen Dong, Kurt Keutzer, Shanghang Zhang
null
null
2022
ijcai
Graph-based Dynamic Word Embeddings
null
As time goes by, language evolves and word semantics change. Unfortunately, traditional word embedding methods neglect the evolution of language and assume that word representations are static. Although contextualized word embedding models can capture the diverse representations of polysemous words, they ignore temporal information as well. To tackle the aforementioned challenges, we propose a graph-based dynamic word embedding (GDWE) model, which focuses on continually capturing the semantic drift of words. We introduce word-level knowledge graphs (WKGs) to store short-term and long-term knowledge. WKGs provide rich structural information as a supplement to lexical information, which helps enhance word embedding quality and capture semantic drift quickly. Theoretical analysis and extensive experiments validate the effectiveness of our GDWE on dynamic word embedding learning.
Yuyin Lu, Xin Cheng, Ziran Liang, Yanghui Rao
null
null
2022
ijcai
Deexaggeration
null
We introduce a new task in hyperbole processing, deexaggeration, which concerns the recovery of the meaning of what is being exaggerated in a hyperbolic sentence in the form of a structured representation. In this paper, we lay the groundwork for the computational study of understanding hyperbole by (1) defining a structured representation to encode what is being exaggerated in a hyperbole in a non-hyperbolic manner, (2) annotating the hyperbolic sentences in two existing datasets, HYPO and HYPO-cn, using this structured representation, (3) conducting an empirical analysis of our annotated corpora, and (4) presenting preliminary results on the deexaggeration task.
Li Kong, Chuanyi Li, Vincent Ng
null
null
2022
ijcai
Lyra: A Benchmark for Turducken-Style Code Generation
null
Recently, neural techniques have been used to generate source code automatically. While promising for declarative languages, these approaches achieve much poorer performance on datasets for imperative languages. Since a declarative language is typically embedded in an imperative language (i.e., the turducken-style programming) in real-world software development, the promising results on declarative languages can hardly lead to significant reduction of manual software development efforts. In this paper, we define a new code generation task: given a natural language comment, this task aims to generate a program in a base imperative language with an embedded declarative language. To our knowledge, this is the first turducken-style code generation task. For this task, we present Lyra: a dataset in Python with embedded SQL. This dataset contains 2,000 carefully annotated database manipulation programs from real usage projects. Each program is paired with both a Chinese comment and an English comment. In our experiment, we adopted Transformer, BERT-style, and GPT-style models as baselines. In the best setting, GPT-style model can achieve 24% and 25.5% AST exact matching accuracy using Chinese and English comments, respectively. Therefore, we believe that Lyra provides a new challenge for code generation. Yet, overcoming this challenge may significantly boost the applicability of code generation techniques for real-world software development.
Qingyuan Liang, Zeyu Sun, Qihao Zhu, Wenjie Zhang, Lian Yu, Yingfei Xiong, Lu Zhang
null
null
2022
ijcai
Explicit Alignment Learning for Neural Machine Translation
null
Even though neural machine translation (NMT) has become the state-of-the-art solution for end-to-end translation, it still suffers from a lack of translation interpretability, which may be conveniently enhanced by explicit alignment learning (EAL), as performed in traditional statistical machine translation (SMT). To provide the benefits of both NMT and SMT, this paper presents a novel model design that enhances NMT with an additional training process for EAL, in addition to the end-to-end translation training. Thus, we propose an explicit alignment learning approach in which we remove the need for an additional alignment model and instead perform embedding mixup with alignments based on encoder-decoder attention weights in the NMT model. We conducted experiments on both small-scale (IWSLT14 De->En and IWSLT13 Fr->En) and large-scale (WMT14 En->De, En->Fr, WMT17 Zh->En) benchmarks. Evaluation results show that our EAL methods significantly outperform strong baseline methods, which shows the effectiveness of EAL. Further exploration shows that the translation improvements are due to a better spatial alignment of the source- and target-language embeddings. Our method improves translation performance without increasing model parameters or training data, which verifies that the idea of incorporating SMT techniques into NMT is worthwhile.
Zuchao Li, Hai Zhao, Fengshun Xiao, Masao Utiyama, Eiichiro Sumita
null
null
2022
ijcai
Parameter-Efficient Sparsity for Large Language Models Fine-Tuning
null
With the dramatically increased number of parameters in language models, sparsity methods have received ever-increasing research attention for compressing and accelerating these models. While most research focuses on how to accurately retain appropriate weights while maintaining the performance of the compressed model, there are challenges in the computational overhead and memory footprint of sparse training when compressing large-scale language models. To address this problem, we propose a Parameter-efficient Sparse Training (PST) method to reduce the number of trainable parameters during sparsity-aware training on downstream tasks. Specifically, we first combine data-free and data-driven criteria to efficiently and accurately measure the importance of weights. Then we investigate the intrinsic redundancy of data-driven weight importance and derive two salient characteristics, i.e., low-rankness and structuredness. Based on these, two groups of small matrices are introduced to compute the data-driven importance of weights instead of using the original large importance-score matrix, which makes sparse training resource- and parameter-efficient. Experiments with diverse networks (i.e., BERT, RoBERTa, and GPT-2) on dozens of datasets demonstrate that PST performs on par with or better than previous sparsity methods despite training only a small number of parameters. For instance, compared with previous sparsity methods, our PST requires only 1.5% of the trainable parameters to achieve comparable performance on BERT.
Yuchao Li, Fuli Luo, Chuanqi Tan, Mengdi Wang, Songfang Huang, Shen Li, Junjie Bai
null
null
2022
ijcai
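The core trick in PST, replacing the full data-driven importance matrix with low-rank factors plus structured row/column scores, can be sketched with NumPy. The shapes, the `alpha` mixing weight, and the top-k pruning rule below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def approx_importance(W, A, B, row_score, col_score, alpha=0.5):
    """Combine a data-free magnitude criterion with a data-driven term stored as
    low-rank factors (A @ B) plus structured row/column scores, instead of
    materializing a full importance matrix the size of W."""
    data_free = np.abs(W)
    data_driven = A @ B + row_score[:, None] + col_score[None, :]
    return data_free + alpha * data_driven

def prune_mask(importance, sparsity=0.5):
    """Keep the top-(1 - sparsity) fraction of weights by importance."""
    k = int(importance.size * (1 - sparsity))
    threshold = np.sort(importance.ravel())[::-1][k - 1]
    return importance >= threshold
```

For a d×d weight matrix with rank-r factors, the trainable importance parameters shrink from d² to roughly 2dr + 2d, which is where the parameter efficiency comes from.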
Generating a Structured Summary of Numerous Academic Papers: Dataset and Method
null
Writing a survey paper on one research topic usually needs to cover the salient content from numerous related papers, which can be modeled as a multi-document summarization (MDS) task. Existing MDS datasets usually focus on producing the structureless summary covering a few input documents. Meanwhile, previous structured summary generation works focus on summarizing a single document into a multi-section summary. These existing datasets and methods cannot meet the requirements of summarizing numerous academic papers into a structured summary. To deal with the scarcity of available data, we propose BigSurvey, the first large-scale dataset for generating comprehensive summaries of numerous academic papers on each topic. We collect target summaries from more than seven thousand survey papers and utilize their 430 thousand reference papers’ abstracts as input documents. To organize the diverse content from dozens of input documents and ensure the efficiency of processing long text sequences, we propose a summarization method named category-based alignment and sparse transformer (CAST). The experimental results show that our CAST method outperforms various advanced summarization methods.
Shuaiqi LIU, Jiannong Cao, Ruosong Yang, Zhiyuan Wen
null
null
2022
ijcai
FastRE: Towards Fast Relation Extraction with Convolutional Encoder and Improved Cascade Binary Tagging Framework
null
Recent work on extracting relations from texts has achieved excellent performance. However, most existing methods pay little attention to efficiency, making it still challenging to quickly extract relations from massive or streaming text data in realistic scenarios. The main efficiency bottleneck is that these methods use a Transformer-based pre-trained language model for encoding, which heavily affects training and inference speed. To address this issue, we propose a fast relation extraction model (FastRE) based on a convolutional encoder and an improved cascade binary tagging framework. Compared to previous work, FastRE employs several innovations to improve efficiency while keeping promising performance. Concretely, FastRE adopts a novel convolutional encoder architecture combining dilated convolutions, gated units, and residual connections, which significantly reduces the computational cost of training and inference while maintaining satisfactory performance. Moreover, to improve the cascade binary tagging framework, FastRE first introduces a type-relation mapping mechanism to accelerate tagging efficiency and alleviate relation redundancy, and then utilizes a position-dependent adaptive thresholding strategy to obtain higher tagging accuracy and better model generalization. Experimental results demonstrate that FastRE strikes a good balance between efficiency and performance, achieving 3-10$\times$ faster training, 7-15$\times$ faster inference, and 1/100 of the parameters compared to state-of-the-art models, while remaining competitive in performance. Our code is available at \url{https://github.com/seukgcode/FastRE}.
Guozheng Li, Xu Chen, Peng Wang, Jiafeng Xie, Qiqing Luo
null
null
2022
ijcai
Abstract Rule Learning for Paraphrase Generation
null
In its early years, paraphrase generation typically adopted rule-based methods, which are interpretable and able to make global transformations to the original sentence, but struggle to produce fluent paraphrases. Recently, deep neural networks have shown impressive performance in generating paraphrases. However, current neural models are black boxes and are prone to making only local modifications to the inputs. In this work, we combine these two approaches into RULER, a novel approach that performs abstract rule learning for paraphrasing. The key idea is to explicitly learn generalizable rules that enhance the paraphrase generation process of neural networks. In RULER, we first propose a rule generalizability metric to guide the model to generate rules underlying the paraphrasing. Then, we leverage neural networks to generate paraphrases by refining the sentences transformed by the learned rules. Extensive experimental results demonstrate the superiority of RULER over previous state-of-the-art methods in terms of paraphrase quality, generalization ability, and interpretability.
Xianggen Liu, Wenqiang Lei, Jiancheng Lv, Jizhe Zhou
null
null
2022
ijcai
Neutral Utterances are Also Causes: Enhancing Conversational Causal Emotion Entailment with Social Commonsense Knowledge
null
Conversational Causal Emotion Entailment aims to detect the causal utterances for a non-neutral targeted utterance in a conversation. In this work, we build conversations as graphs to overcome the implicit contextual modeling of the original entailment style. Following previous work, we further introduce emotion information into the graphs. Emotion information can markedly promote the detection of causal utterances whose emotion is the same as that of the targeted utterance. However, it is still hard to detect causal utterances with different emotions, especially neutral ones, because models are limited in reasoning about causal clues and passing them between utterances. To alleviate this problem, we introduce social commonsense knowledge (CSK) and propose a Knowledge Enhanced Conversation graph (KEC), which propagates CSK between two utterances. As not all CSK is emotionally suitable for utterances, we propose a sentiment-realized knowledge selection strategy to filter CSK. To process KEC, we further construct Knowledge Enhanced Directed Acyclic Graph networks. Experimental results show that our method outperforms baselines and infers more causes with emotions different from that of the targeted utterance.
Jiangnan Li, Fandong Meng, Zheng Lin, Rui Liu, Peng Fu, Yanan Cao, Weiping Wang, Jie Zhou
null
null
2022
ijcai
Searching for Optimal Subword Tokenization in Cross-domain NER
null
Input distribution shift is one of the vital problems in unsupervised domain adaptation (UDA). The most popular UDA approaches focus on domain-invariant representation learning (DIRL), trying to align the features from different domains into similar feature distributions. However, these approaches ignore the direct alignment of input word distributions between domains, which is a vital factor in word-level classification tasks such as cross-domain NER. In this work, we shed new light on cross-domain NER by introducing a subword-level solution, X-Piece, for input word-level distribution shift in NER. Specifically, we re-tokenize the input words of the source domain to approach the target subword distribution, which is formulated and solved as an optimal transport problem. As this approach operates at the input level, it can also be combined with previous DIRL methods for further improvement. Experimental results show the effectiveness of the proposed method based on a BERT tagger on four benchmark NER datasets. The proposed method is also shown to benefit DIRL methods such as DANN.
Ruotian Ma, Yiding Tan, Xin Zhou, Xuanting Chen, Di Liang, Sirui Wang, Wei Wu, Tao Gui
null
null
2022
ijcai
CUP: Curriculum Learning based Prompt Tuning for Implicit Event Argument Extraction
null
Implicit event argument extraction (EAE) aims to identify arguments that could scatter over the document. Most previous work focuses on learning the direct relations between arguments and the given trigger, while the implicit relations with long-range dependency are not well studied. Moreover, recent neural network based approaches rely on a large amount of labeled data for training, which is unavailable due to the high labelling cost. In this paper, we propose a Curriculum learning based Prompt tuning (CUP) approach, which resolves implicit EAE by four learning stages. The stages are defined according to the relations with the trigger node in a semantic graph, which well captures the long-range dependency between arguments and the trigger. In addition, we integrate a prompt-based encoder-decoder model to elicit related knowledge from pre-trained language models (PLMs) in each stage, where the prompt templates are adapted with the learning progress to enhance the reasoning for arguments. Experimental results on two well-known benchmark datasets show the great advantages of our proposed approach. In particular, we outperform the state-of-the-art models in both fully-supervised and low-data scenarios.
Jiaju Lin, Qin Chen, Jie Zhou, Jian Jin, Liang He
null
null
2022
ijcai
Enhancing Text Generation via Multi-Level Knowledge Aware Reasoning
null
How to generate high-quality textual content is a non-trivial task. Existing methods generally generate text by grounding it in word-level knowledge. However, word-level knowledge cannot express multi-word text units, so existing methods may generate low-quality and unreasonable text. In this paper, we leverage event-level knowledge to enhance text generation. However, event knowledge is very sparse. To solve this problem, we split a coarse-grained event into fine-grained word components to obtain the word-level knowledge among event components. This word-level knowledge models the interaction among event components, which makes it possible to reduce the sparsity of events. Based on the event-level and word-level knowledge, we devise a multi-level knowledge-aware reasoning framework. Specifically, we first utilize event knowledge for event-based content planning, i.e., selecting reasonable event sketches conditioned on the input text. Then, we combine the selected event sketches with the word-level knowledge for text generation. We validate our method on two widely used datasets; experimental results demonstrate the effectiveness of our framework for text generation.
Feiteng Mu, Wenjie Li
null
null
2022
ijcai
Prompting to Distill: Boosting Data-Free Knowledge Distillation via Reinforced Prompt
null
Data-free knowledge distillation (DFKD) conducts knowledge distillation without depending on the original training data and has recently achieved impressive results in accelerating pre-trained language models. At the heart of DFKD is reconstructing a synthetic dataset by inverting the parameters of the uncompressed model. Prior DFKD approaches, however, have largely relied on hand-crafted priors of the target data distribution for the reconstruction, which can be inevitably biased and are often incompetent to capture the intrinsic distributions. To address this problem, we propose a prompt-based method, termed PromptDFD, that allows us to take advantage of learned language priors, which effectively harmonize the synthetic sentences to be semantically and grammatically correct. Specifically, PromptDFD leverages a pre-trained generative model to provide language priors and introduces a reinforced topic prompter to control data synthesis, making the generated samples thematically relevant and semantically plausible, and thus friendly to downstream tasks. As shown in our experiments, the proposed method substantially improves synthesis quality and achieves considerable improvements in distillation performance. In some cases, PromptDFD even yields results on par with those of data-driven knowledge distillation with access to the original training data.
Xinyin Ma, Xinchao Wang, Gongfan Fang, Yongliang Shen, Weiming Lu
null
null
2,022
ijcai
AdMix: A Mixed Sample Data Augmentation Method for Neural Machine Translation
null
In Neural Machine Translation (NMT), data augmentation methods such as back-translation have proven their effectiveness in improving translation performance. In this paper, we propose a novel data augmentation approach for NMT, which is independent of any additional training data. Our approach, AdMix, consists of two parts: 1) introduce faint discrete noise (word replacement, word dropping, word swapping) into the original sentence pairs to form augmented samples; 2) generate new synthetic training data by softly mixing the augmented samples with their original samples in the training corpus. Experiments on three translation datasets of different scales show that AdMix achieves significant improvements (1.0 to 2.7 BLEU points) over a strong Transformer baseline. When combined with other data augmentation techniques (e.g., back-translation), our approach can obtain further improvements.
Chang Jin, Shigui Qiu, Nini Xiao, Hao Jia
null
null
2,022
ijcai
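The soft mixing step of AdMix can be sketched as an embedding-level interpolation. The Beta-sampled mixing weight and the keep-original-dominant clamp below are mixup-style assumptions for illustration, not details taken from the paper:

```python
import numpy as np

def admix(orig_emb, aug_emb, alpha=0.2, rng=None):
    """Softly mix an augmented sample's embeddings with the original's.

    The mixing weight lambda is drawn from Beta(alpha, alpha), as in
    mixup-style augmentation (an assumption, not the paper's exact scheme),
    and clamped so the original sample stays dominant.
    """
    rng = rng or np.random.default_rng(0)
    lam = rng.beta(alpha, alpha)
    lam = max(lam, 1.0 - lam)          # keep the original sample dominant
    return lam * orig_emb + (1.0 - lam) * aug_emb

orig = np.ones((4, 8))                 # toy sentence: 4 tokens, dim 8
aug = np.zeros((4, 8))                 # e.g. after word dropping
mixed = admix(orig, aug)
```

In practice the mixing would be applied to encoder inputs during training; here plain arrays stand in for token embeddings.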
Learning by Interpreting
null
This paper introduces a novel way of enhancing NLP prediction accuracy by incorporating model interpretation insights. Conventional efforts often focus on balancing the trade-offs between accuracy and interpretability, for instance, sacrificing model performance to increase explainability. Here, we take a unique approach and show that model interpretation can ultimately help improve NLP quality. Specifically, we employ our learned interpretability results, obtained using attention mechanisms, LIME, and SHAP, to train our model. We demonstrate significant gains of up to +3.4 BLEU points on NMT and up to +4.8 points on GLUE tasks, verifying our hypothesis that it is possible to achieve better model learning by incorporating model interpretation knowledge.
Xuting Tang, Abdul Rafae Khan, Shusen Wang, Jia Xu
null
null
2,022
ijcai
Low-Resource NER by Data Augmentation With Prompting
null
Named entity recognition (NER) is a fundamental information extraction task that seeks to identify entity mentions of certain types in text. Despite numerous advances, existing NER methods rely on extensive supervision for model training and thus struggle in low-resource scenarios with limited training data. In this paper, we propose a new data augmentation method for low-resource NER that elicits knowledge from BERT with prompting strategies. In particular, we devise a label-conditioned word replacement strategy that produces more label-consistent examples by capturing the underlying word-label dependencies, and a question-answering-based prompting method to generate new training data from unlabeled texts. The experimental results confirm the effectiveness of our approach. In a low-resource scenario with only 150 training sentences, our approach outperforms previous methods without data augmentation by over 40% in F1 and prior best data augmentation methods by over 2.0% in F1. Furthermore, our approach also fits a zero-shot scenario, yielding promising results without using any human-labeled data for the task.
Jian Liu, Yufeng Chen, Jinan Xu
null
null
2,022
ijcai
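One simple variant of label-conditioned word replacement can be sketched as follows. Here, entity mentions are kept intact and only context words are replaced, and `fill_mask` is a hypothetical stand-in for a masked language model's fill-mask prediction; the paper's actual strategy conditions the prediction on labels more directly:

```python
def label_conditioned_replace(tokens, labels, fill_mask, outside="O"):
    """Augment a labeled sentence by replacing non-entity words with
    alternatives proposed by a masked LM, keeping entity spans (and hence
    the label sequence) intact. `fill_mask(context, token)` is a toy
    stand-in for e.g. BERT's masked-token prediction."""
    augmented = []
    for token, label in zip(tokens, labels):
        if label == outside:
            augmented.append(fill_mask(tokens, token))  # propose replacement
        else:
            augmented.append(token)                     # keep entity mention
    return augmented

# Toy usage with a dictionary-backed "masked LM".
synonyms = {"lives": "resides"}
aug = label_conditioned_replace(
    ["John", "lives", "in", "Paris"],
    ["B-PER", "O", "O", "B-LOC"],
    lambda ctx, tok: synonyms.get(tok, tok),
)
```

Because the label sequence is unchanged, the augmented sentence can be added directly to the training set.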
Curriculum-Based Self-Training Makes Better Few-Shot Learners for Data-to-Text Generation
null
Despite the success of text-to-text pre-trained models in various natural language generation (NLG) tasks, the generation performance is largely restricted by the number of labeled data in downstream tasks, particularly in data-to-text generation tasks. Existing works mostly utilize abundant unlabeled structured data to conduct unsupervised pre-training for task adaption, which fail to model the complex relationship between source structured data and target texts. Thus, we introduce self-training as a better few-shot learner than task-adaptive pre-training, which explicitly captures this relationship via pseudo-labeled data generated by the pre-trained model. To alleviate the side-effect of low-quality pseudo-labeled data during self-training, we propose a novel method called Curriculum-Based Self-Training (CBST) to effectively leverage unlabeled data in a rearranged order determined by the difficulty of text generation. Experimental results show that our method can outperform fine-tuning and task-adaptive pre-training methods, and achieve state-of-the-art performance in the few-shot setting of data-to-text generation.
Pei Ke, Haozhe Ji, Zhenyu Yang, Yi Huang, Junlan Feng, Xiaoyan Zhu, Minlie Huang
null
null
2,022
ijcai
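The curriculum rearrangement in CBST can be sketched as follows. The cumulative easy-to-hard schedule and the `difficulty` callable are illustrative assumptions; the paper defines its difficulty measure specifically for text generation:

```python
def curriculum_rounds(pseudo_data, difficulty, n_rounds=3):
    """Rearrange pseudo-labeled examples into easy-to-hard training rounds:
    round k trains on the easiest k/n_rounds fraction of the data."""
    ranked = sorted(pseudo_data, key=difficulty)
    return [ranked[:round(len(ranked) * k / n_rounds)]
            for k in range(1, n_rounds + 1)]

# Toy usage: output length as a crude proxy for generation difficulty.
rounds = curriculum_rounds(
    ["a", "abc", "ab", "abcdef", "abcd", "abcde"], difficulty=len)
```

Each round's subset would be mixed with the labeled data when re-training the generator.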
On Tracking Dialogue State by Inheriting Slot Values in Mentioned Slot Pools
null
Dialogue state tracking (DST) is a component of the task-oriented dialogue system. It is responsible for extracting and managing slots, where each slot represents a part of the information needed to accomplish a task, and slot values are updated recurrently in each dialogue turn. However, many DST models cannot update slot values appropriately. These models may repeatedly inherit wrong slot values extracted in previous turns, resulting in the failure of the entire DST task. They cannot update indirectly mentioned slots well, either. This study designs a model with a mentioned slot pool (MSP) to tackle the update problem. The MSP is a slot-specific memory that records all mentioned slot values that may be inherited, and our model updates slot values according to the MSP and the dialogue context. Our model rejects inheriting the previous slot value when it predicts the value is wrong. Then, it extracts the slot value from the current dialogue context. As the contextual information accumulates, the new value is more likely to be correct. It can also track an indirectly mentioned slot by picking a value from the MSP. Experimental results show that our model reaches state-of-the-art DST performance on the MultiWOZ datasets.
Zhoujian Sun, Zhengxing Huang, Nai Ding
null
null
2,022
ijcai
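The mentioned slot pool mechanism can be sketched as a small memory class. The three update actions and the pick-latest heuristic below are simplifications of the paper's learned decisions, for illustration only:

```python
class MentionedSlotPool:
    """Toy slot-specific memory of previously mentioned candidate values."""

    def __init__(self):
        self.pool = {}                       # slot name -> mentioned values

    def mention(self, slot, value):
        self.pool.setdefault(slot, [])
        if value not in self.pool[slot]:
            self.pool[slot].append(value)

    def update_state(self, state, slot, action, extracted=None):
        """`action` is one of "inherit", "extract", "pick" (in the paper
        these decisions come from the model, not a caller)."""
        if action == "extract" and extracted is not None:
            state[slot] = extracted           # reject inheritance, re-extract
            self.mention(slot, extracted)
        elif action == "pick" and self.pool.get(slot):
            state[slot] = self.pool[slot][-1]  # toy: pick the latest value
        # "inherit": keep the previous value unchanged
        return state
```

The dialogue state is a plain dict here; each turn the model would choose an action per slot based on the MSP and the dialogue context.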
Document-level Event Factuality Identification via Reinforced Multi-Granularity Hierarchical Attention Networks
null
Document-level Event Factuality Identification (DEFI) predicts event factuality according to the current document, and mainly depends on event-related tokens and sentences. However, previous studies rely on annotated information and do not filter irrelevant and noisy texts. Therefore, this paper proposes a novel end-to-end model, i.e., the Reinforced Multi-Granularity Hierarchical Attention Network (RMHAN), which learns information at different levels of granularity from tokens and sentences hierarchically. Moreover, with hierarchical reinforcement learning, RMHAN first selects relevant and meaningful tokens, and then selects useful sentences for document-level encoding. Experimental results on the DLEF-v2 corpus show that the RMHAN model outperforms several state-of-the-art baselines and achieves the best performance.
Zhong Qian, Peifeng Li, Qiaoming Zhu, Guodong Zhou
null
null
2,022
ijcai
Control Globally, Understand Locally: A Global-to-Local Hierarchical Graph Network for Emotional Support Conversation
null
Emotional support conversation aims at reducing the emotional distress of the help-seeker, which is a new and challenging task. It requires the system to explore the cause of the help-seeker's emotional distress and understand their psychological intention in order to provide supportive responses. However, existing methods mainly focus on sequential contextual information, ignoring the hierarchical relationships between the global cause and local psychological intentions behind conversations, thus leading to a weak ability to provide emotional support. In this paper, we propose a Global-to-Local Hierarchical Graph Network (GLHG) to capture multi-source information (global cause, local intentions, and dialog history) and model the hierarchical relationships between them, which consists of a multi-source encoder, a hierarchical graph reasoner, and a global-guide decoder. Furthermore, a novel training objective is designed to monitor the semantic information of the global cause. Experimental results on the emotional support conversation dataset, ESConv, confirm that the proposed GLHG achieves state-of-the-art performance on both automatic and human evaluations.
Wei Peng, Yue Hu, Luxi Xing, Yuqiang Xie, Yajing Sun, Yunpeng Li
null
null
2,022
ijcai
Training Naturalized Semantic Parsers with Very Little Data
null
Semantic parsing is an important NLP problem, particularly for voice assistants such as Alexa and Google Assistant. State-of-the-art (SOTA) semantic parsers are seq2seq architectures based on large language models that have been pretrained on vast amounts of text. To better leverage that pretraining, recent work has explored a reformulation of semantic parsing whereby the output sequences are themselves natural language sentences, but in a controlled fragment of natural language. This approach delivers strong results, particularly for few-shot semantic parsing, which is of key importance in practice and the focus of our paper. We push this line of work forward by introducing an automated methodology that delivers very significant additional improvements by utilizing modest amounts of unannotated data, which is typically easy to obtain. Our method is based on a novel synthesis of four techniques: joint training with auxiliary unsupervised tasks; constrained decoding; self-training; and paraphrasing. We show that this method delivers new SOTA few-shot performance on the Overnight dataset, particularly in very low-resource settings, and very compelling few-shot results on a new semantic parsing dataset.
Subendhu Rongali, Konstantine Arkoudas, Melanie Rubino, Wael Hamza
null
null
2,022
ijcai
BiFSMN: Binary Neural Network for Keyword Spotting
null
Deep neural networks, such as Deep-FSMN, have been widely studied for keyword spotting (KWS) applications. However, computational resources for these networks are significantly constrained since they usually run on-call on edge devices. In this paper, we present BiFSMN, an accurate and extremely efficient binary neural network for KWS. We first construct a High-frequency Enhancement Distillation scheme for binarization-aware training, which emphasizes the high-frequency information from the full-precision network's representation that is more crucial for the optimization of the binarized network. Then, to allow instant and adaptive accuracy-efficiency trade-offs at runtime, we also propose a Thinnable Binarization Architecture to further liberate the acceleration potential of the binarized network from the topology perspective. Moreover, we implement a Fast Bitwise Computation Kernel for BiFSMN on ARMv8 devices, which fully utilizes registers and increases instruction throughput to push the limit of deployment efficiency. Extensive experiments show that BiFSMN outperforms existing binarization methods by convincing margins on various datasets and is even comparable with its full-precision counterpart (e.g., less than 3% drop on Speech Commands V1-12). We highlight that, benefiting from the thinnable architecture and the optimized 1-bit implementation, BiFSMN achieves an impressive 22.3x speedup and 15.5x storage savings on real-world edge hardware.
Haotong Qin, Xudong Ma, Yifu Ding, Xiaoyang Li, Yang Zhang, Yao Tian, Zejun Ma, Jie Luo, Xianglong Liu
null
null
2,022
ijcai
Document-level Relation Extraction via Subgraph Reasoning
null
Document-level relation extraction aims to extract relations between entities in a document. In contrast to sentence-level relation extraction, it deals with longer texts and more complex entity interactions, which requires reasoning over multiple sentences with rich reasoning skills. Most current researches construct a document-level graph first, and then focus on the overall graph structure or the paths between the target entity pair in the graph. In this paper, we propose a novel subgraph reasoning (SGR) framework for document-level relation extraction. SGR combines the advantages of both graph-based models and path-based models, integrating various paths between the target entity pair into a much simpler subgraph structure to perform relational reasoning. Moreover, the paths generated by our designed heuristic strategy explicitly model the requisite reasoning skills and roughly cover the supporting sentences for each relation instance. Experimental results on DocRED show that SGR outperforms existing models, and further analyses demonstrate that our method is both effective and explainable. Our code is available at https://github.com/Crysta1ovo/SGR.
Xingyu Peng, Chong Zhang, Ke Xu
null
null
2,022
ijcai
A Unified Strategy for Multilingual Grammatical Error Correction with Pre-trained Cross-Lingual Language Model
null
Synthetic data construction of Grammatical Error Correction (GEC) for non-English languages relies heavily on human-designed and language-specific rules, which produce limited error-corrected patterns. In this paper, we propose a generic and language-independent strategy for multilingual GEC, which can train a GEC system effectively for a new non-English language with only two easy-to-access resources: 1) a pre-trained cross-lingual language model (PXLM) and 2) parallel translation data between English and the language. Our approach creates diverse parallel GEC data without any language-specific operations by taking the non-autoregressive translation generated by PXLM and the gold translation as error-corrected sentence pairs. Then, we reuse PXLM to initialize the GEC model and pre-train it with the synthetic data generated by itself, which yields further improvement. We evaluate our approach on three public benchmarks of GEC in different languages. It achieves state-of-the-art results on the NLPCC 2018 Task 2 dataset (Chinese) and obtains competitive performance on Falko-Merlin (German) and RULEC-GEC (Russian). Further analysis demonstrates that our data construction method is complementary to rule-based approaches.
Xin Sun, Tao Ge, Shuming Ma, Jingjing Li, Furu Wei, Houfeng Wang
null
null
2,022
ijcai
TaxoPrompt: A Prompt-based Generation Method with Taxonomic Context for Self-Supervised Taxonomy Expansion
null
Taxonomies are hierarchical classifications widely exploited to facilitate downstream natural language processing tasks. The taxonomy expansion task aims to incorporate emergent concepts into existing taxonomies. Prior works focus on modeling the local substructure of taxonomies but neglect the global structure. In this paper, we propose TaxoPrompt, a framework that learns the global structure by prompt tuning with taxonomic context. Prompt tuning leverages a template to formulate downstream tasks in masked language model form to better exploit distributed semantic knowledge. To further infuse global structure knowledge into language models, we enhance the prompt template by exploiting the taxonomic context constructed by a variant of the random walk algorithm. Experiments on seven public benchmarks show that our proposed TaxoPrompt is effective and efficient in automatically expanding taxonomies and achieves state-of-the-art performance.
Hongyuan Xu, Yunong Chen, Zichen Liu, Yanlong Wen, Xiaojie Yuan
null
null
2,022
ijcai
Propose-and-Refine: A Two-Stage Set Prediction Network for Nested Named Entity Recognition
null
Nested named entity recognition (nested NER) is a fundamental task in natural language processing. Various span-based methods have been proposed to detect nested entities with span representations. However, span-based methods do not consider the relationship between a span and other entities or phrases, which is helpful in the NER task. Besides, span-based methods have trouble predicting long entities due to limited span enumeration length. To mitigate these issues, we present the Propose-and-Refine Network (PnRNet), a two-stage set prediction network for nested NER. In the propose stage, we use a span-based predictor to generate some coarse entity predictions as entity proposals. In the refine stage, proposals interact with each other, and richer contextual information is incorporated into the proposal representations. The refined proposal representations are used to re-predict entity boundaries and classes. In this way, errors in coarse proposals can be eliminated, and the boundary prediction is no longer constrained by the span enumeration length limitation. Additionally, we build multi-scale sentence representations, which better model the hierarchical structure of sentences and provide richer contextual information than token-level representations. Experiments show that PnRNet achieves state-of-the-art performance on four nested NER datasets and one flat NER dataset.
Shuhui Wu, Yongliang Shen, Zeqi Tan, Weiming Lu
null
null
2,022
ijcai
Robust Fine-tuning via Perturbation and Interpolation from In-batch Instances
null
Fine-tuning pretrained language models (PLMs) on downstream tasks has become common practice in natural language processing. However, most PLMs are vulnerable, e.g., they are brittle under adversarial attacks or imbalanced data, which hinders their application to some downstream tasks, especially in safety-critical scenarios. In this paper, we propose a simple yet effective fine-tuning method called Match-Tuning to force the PLMs to be more robust. For each instance in a batch, we involve other instances in the same batch to interact with it. To be specific, regarding the instances with other labels as a perturbation, Match-Tuning makes the model more robust to noise at the beginning of training. Nearing the end, Match-Tuning focuses more on performing interpolation among instances with the same label for better generalization. Extensive experiments on various tasks in the GLUE benchmark show that Match-Tuning consistently outperforms vanilla fine-tuning by 1.64 points. Moreover, Match-Tuning exhibits remarkable robustness to adversarial attacks and data imbalance.
Shoujie Tong, Qingxiu Dong, Damai Dai, Yifan Song, Tianyu Liu, Baobao Chang, Zhifang Sui
null
null
2,022
ijcai
Towards Discourse-Aware Document-Level Neural Machine Translation
null
Current document-level neural machine translation (NMT) systems have achieved remarkable progress with document context. Nevertheless, discourse information that has been proven effective in many NLP tasks is ignored in most previous work. In this work, we aim at incorporating the coherence information hidden within the RST-style discourse structure into machine translation. To achieve this, we propose a document-level NMT system enhanced with the discourse-aware document context, which is named Disco2NMT. Specifically, Disco2NMT models document context based on the discourse dependency structures through a hierarchical architecture. We first convert the RST tree of an article into a dependency structure and then build the graph convolutional network (GCN) upon the segmented EDUs under the guidance of RST dependencies to capture the discourse-aware context for NMT incorporation. We conduct experiments on the document-level English-German and English-Chinese translation tasks with three domains (TED, News, and Europarl). Experimental results show that our Disco2NMT model significantly surpasses both context-agnostic and context-aware baseline systems on multiple evaluation metrics.
Xin Tan, Longyin Zhang, Fang Kong, Guodong Zhou
null
null
2,022
ijcai
Automatic Noisy Label Correction for Fine-Grained Entity Typing
null
Fine-grained entity typing (FET) aims to assign proper semantic types to entity mentions according to their context, which is a fundamental task in various entity-leveraging applications. Current FET systems are usually built on large-scale weakly-supervised/distantly-annotated data, which may contain abundant noise and thus severely hinder the performance of the FET task. Although previous studies have achieved great success in automatically identifying noisy labels in FET, they usually rely on auxiliary resources that may be unavailable in real-world applications (e.g., pre-defined hierarchical type structures, human-annotated subsets). In this paper, we propose a novel approach to automatically correct noisy labels for FET without external resources. Specifically, it first identifies potentially noisy labels by estimating the posterior probability of a label being positive or negative according to the logits output by the model, and then relabels candidate noisy labels by training a robust model over the remaining clean labels. Experiments on two popular benchmarks prove the effectiveness of our method. Our source code can be obtained from https://github.com/CCIIPLab/DenoiseFET.
Weiran Pan, Wei Wei, Feida Zhu
null
null
2,022
ijcai
Unsupervised Context Aware Sentence Representation Pretraining for Multi-lingual Dense Retrieval
null
Recent research demonstrates the effectiveness of using pretrained language models (PLMs) to improve dense retrieval and multilingual dense retrieval. In this work, we present a simple but effective monolingual pretraining task called contrastive context prediction (CCP) to learn sentence representations by modeling sentence-level contextual relations. By pulling the embeddings of sentences in a local context closer and pushing random negative samples away, different languages can form isomorphic structures, so that sentence pairs in two different languages are automatically aligned. Our experiments show that model collapse and information leakage can easily occur during contrastive training of language models, but a language-specific memory bank and an asymmetric batch normalization operation play essential roles in preventing collapse and information leakage, respectively. Besides, a post-processing step for sentence embeddings is also very effective in achieving better retrieval performance. On the multilingual sentence retrieval task Tatoeba, our model achieves new SOTA results among methods that do not use bilingual data. Our model also shows larger gains on Tatoeba when transferring between non-English pairs. On two multilingual query-passage retrieval tasks, XOR Retrieve and Mr.TYDI, our model even achieves two SOTA results in both zero-shot and supervised settings among all pretraining models using bilingual data.
Ning Wu, Yaobo Liang, Houxing Ren, Linjun Shou, Nan Duan, Ming Gong, Daxin Jiang
null
null
2,022
ijcai
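The contrastive objective described above (pull in-context sentence pairs together, push in-batch negatives apart) can be sketched as an InfoNCE-style loss. This is a minimal sketch: the language-specific memory bank and asymmetric batch normalization are omitted, and the temperature value is an assumption:

```python
import numpy as np

def contrastive_context_loss(anchors, positives, temperature=0.1):
    """InfoNCE-style loss: each anchor sentence should be closest to its
    in-context positive; other rows in the batch act as negatives."""
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature        # (B, B): diagonal = positive pairs
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_probs)))
```

The loss is minimized when each anchor's highest similarity is with its own positive, which is the alignment property the paper relies on.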
UM4: Unified Multilingual Multiple Teacher-Student Model for Zero-Resource Neural Machine Translation
null
Most translation tasks among languages belong to the zero-resource translation problem, where parallel corpora are unavailable. Multilingual neural machine translation (MNMT) enables one-pass translation using a shared semantic space for all languages, compared to two-pass pivot translation, but often underperforms the pivot-based method. In this paper, we propose a novel method, named Unified Multilingual Multiple teacher-student Model for NMT (UM4). Our method unifies source-teacher, target-teacher, and pivot-teacher models to guide the student model for zero-resource translation. The source teacher and target teacher force the student to learn the direct source-target translation using the distilled knowledge on both the source and target sides. The monolingual corpus is further leveraged by the pivot-teacher model to enhance the student model. Experimental results demonstrate that our model significantly outperforms previous methods on 72 directions of the WMT benchmark.
Jian Yang, Yuwei Yin, Shuming Ma, Dongdong Zhang, Shuangzhi Wu, Hongcheng Guo, Zhoujun Li, Furu Wei
null
null
2,022
ijcai
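The unified multiple teacher-student objective can be sketched as averaging a distillation loss over the source-, target-, and pivot-teacher distributions. Soft cross-entropy against each teacher is used here as a simplified stand-in for the paper's exact objective:

```python
import numpy as np

def multi_teacher_distill_loss(student_logp, teacher_probs):
    """Average the soft cross-entropy of the student's log-probabilities
    against each teacher's token distribution. `teacher_probs` is a list
    of (batch, vocab) arrays, one per teacher."""
    losses = [-(t * student_logp).sum(axis=-1).mean() for t in teacher_probs]
    return float(np.mean(losses))
```

Unifying the teachers into one loss lets a single student learn the direct source-target direction from all three signals at once.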
Diversity Features Enhanced Prototypical Network for Few-shot Intent Detection
null
Few-shot Intent Detection (FSID) is a challenging task in dialogue systems due to the scarcity of available annotated utterances. Although existing few-shot learning approaches have made remarkable progress, they fall short in adapting to the Generalized Few-shot Intent Detection (GFSID) task, where both seen and unseen classes are present. A core problem shared by these two tasks is that limited training samples fail to cover the diversity of user expressions. In this paper, we propose an effective Diversity Features Enhanced Prototypical Network (DFEPN) that enhances diversity features for novel intents by fully exploiting the diversity of known intent samples. Specifically, DFEPN generates diversity features of samples in the hidden space via a diversity feature generator module and then fuses these features with the original support vectors to obtain a more suitable prototype vector for each class. To evaluate the effectiveness of our model on both the FSID and GFSID tasks, we carry out extensive experiments on two benchmark intent detection datasets. Results demonstrate that our proposed model outperforms existing state-of-the-art methods and maintains stable performance on both tasks.
Fengyi Yang, Xi Zhou, Yi Wang, Abibulla Atawulla, Ran Bi
null
null
2,022
ijcai
Neural Subgraph Explorer: Reducing Noisy Information via Target-oriented Syntax Graph Pruning
null
Recent years have witnessed the emerging success of leveraging syntax graphs for the target sentiment classification task. However, we discover that existing syntax-based models suffer from two issues: noisy information aggregation and loss of distant correlations. In this paper, we propose a novel model termed Neural Subgraph Explorer, which (1) reduces noisy information via pruning target-irrelevant nodes on the syntax graph; and (2) introduces beneficial first-order connections between the target and its related words into the obtained graph. Specifically, we design a multi-hop action score estimator to evaluate the value of each word regarding the specific target. The discrete action sequence is sampled through Gumbel-Softmax and then applied to both the syntax graph and the self-attention graph. To introduce the first-order connections between the target and its relevant words, the two pruned graphs are merged. Finally, graph convolution is conducted on the obtained unified graph to update the hidden states, and this process is stacked over multiple layers. To our knowledge, this is the first attempt at target-oriented syntax graph pruning for this task. Experimental results demonstrate the superiority of our model, which achieves new state-of-the-art performance.
Bowen Xing, Ivor Tsang
null
null
2,022
ijcai
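The Gumbel-Softmax sampling of discrete keep/drop actions for syntax-graph nodes can be sketched as follows. The two-way [keep, drop] logit parameterization from a single relevance score is an illustrative assumption, and only the forward pass is shown (the straight-through gradient trick is omitted):

```python
import numpy as np

def gumbel_softmax_keep(scores, tau=0.5, rng=None):
    """Sample hard keep/drop decisions for graph nodes from relevance
    scores via the Gumbel-Softmax trick (forward pass only)."""
    rng = rng or np.random.default_rng(0)
    logits = np.stack([scores, -scores], axis=1)          # toy [keep, drop] logits
    g = -np.log(-np.log(rng.uniform(size=logits.shape)))  # Gumbel(0, 1) noise
    y = np.exp((logits + g) / tau)
    y /= y.sum(axis=1, keepdims=True)                     # relaxed one-hot sample
    return (y[:, 0] > y[:, 1]).astype(int)                # 1 = keep the node

keep = gumbel_softmax_keep(np.array([10.0, -10.0, 10.0]))
```

Nodes with strongly negative relevance are pruned, which is how target-irrelevant words are dropped from the graph before the merge step.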
MGAD: Learning Descriptional Representation Distilled from Distributional Semantics for Unseen Entities
null
Entity representation plays a central role in building effective entity retrieval models. Recent works propose to learn entity representations based on entity-centric contexts, which achieve SOTA performances on many tasks. However, these methods lead to poor representations for unseen entities since they rely on a multitude of occurrences for each entity to enable accurate representations. To address this issue, we propose to learn enhanced descriptional representations for unseen entities by distilling knowledge from distributional semantics into descriptional embeddings. Specifically, we infer enhanced embeddings for unseen entities based on descriptions by aligning the descriptional embedding space to the distributional embedding space with different granularities, i.e., element-level, batch-level and space-level alignment. Experimental results on four benchmark datasets show that our approach improves the performance over all baseline methods. In particular, our approach can achieve the effectiveness of the teacher model on almost all entities, and maintain such high performance on unseen entities.
Yuanzheng Wang, Xueqi Cheng, Yixing Fan, Xiaofei Zhu, Huasheng Liang, Qiang Yan, Jiafeng Guo
null
null
2,022
ijcai
Robust Interpretable Text Classification against Spurious Correlations Using AND-rules with Negation
null
The state-of-the-art natural language processing models have raised the bar for excellent performance on a variety of tasks in recent years. However, concerns are rising over their primitive sensitivity to distribution biases that reside in the training and testing data. This issue hugely impacts the performance of the models when exposed to out-of-distribution and counterfactual data. The root cause seems to be that many machine learning models are prone to learning shortcuts, modelling simple correlations rather than more fundamental and general relationships. As a result, such text classifiers tend to perform poorly when a human makes minor modifications to the data, which raises questions regarding their robustness. In this paper, we employ a rule-based architecture called the Tsetlin Machine (TM) that learns both simple and complex correlations by ANDing features and their negations. As such, it generates explainable AND-rules using negated and non-negated reasoning. Here, we explore how non-negated reasoning can be more prone to distribution biases than negated reasoning. We further leverage this finding by adapting the TM architecture to mainly perform negated reasoning using the specificity parameter s. As a result, the AND-rules become robust to spurious correlations and can also correctly predict counterfactual data. Our empirical investigation of the model's robustness uses the specificity s to control the degree of negated reasoning. Experiments on publicly available Counterfactually-Augmented Data demonstrate that the negated clauses are robust to spurious correlations and outperform Naive Bayes, SVM, and Bi-LSTM by up to 20%, and ELMo by almost 6% on counterfactual test data.
Rohan Kumar Yadav, Lei Jiao, Ole-Christoffer Granmo, Morten Goodwin
null
null
2,022
ijcai
Relational Triple Extraction: One Step is Enough
null
Extracting relational triples from unstructured text is an essential task in natural language processing and knowledge graph construction. Existing approaches usually contain two fundamental steps: (1) finding the boundary positions of head and tail entities; (2) concatenating specific tokens to form triples. However, nearly all previous methods suffer from the problem of error accumulation, i.e., the boundary recognition error of each entity in step (1) will be accumulated into the final combined triples. To solve the problem, in this paper, we introduce a fresh perspective to revisit the triple extraction task and propose a simple but effective model, named DirectRel. Specifically, the proposed model first generates candidate entities through enumerating token sequences in a sentence, and then transforms the triple extraction task into a linking problem on a "head -> tail" bipartite graph. By doing so, all triples can be directly extracted in only one step. Extensive experimental results on two widely used datasets demonstrate that the proposed model performs better than the state-of-the-art baselines.
Yu-Ming Shang, Heyan Huang, Xin Sun, Wei Wei, Xian-Ling Mao
null
null
2,022
ijcai
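The one-step extraction view can be sketched as span enumeration followed by head-to-tail linking on the resulting bipartite graph. The `score` callable and the confidence threshold below are toy stand-ins for the learned model:

```python
def extract_triples(tokens, score, threshold=0.5, max_len=3):
    """One-step relational triple extraction: enumerate candidate entity
    spans, then link head -> tail pairs with a scoring function that
    returns (relation, confidence) for each pair."""
    spans = [tuple(tokens[i:j]) for i in range(len(tokens))
             for j in range(i + 1, min(i + max_len, len(tokens)) + 1)]
    triples = []
    for head in spans:
        for tail in spans:
            if head == tail:
                continue
            rel, conf = score(head, tail)
            if conf >= threshold:
                triples.append((" ".join(head), rel, " ".join(tail)))
    return triples

def toy_score(head, tail):
    # A hypothetical scorer; in DirectRel this comes from the trained model.
    if head == ("Paris",) and tail == ("France",):
        return ("capital_of", 0.9)
    return ("none", 0.0)

triples = extract_triples(["Paris", "is", "in", "France"], toy_score)
```

Because entity generation and linking happen in one pass, no boundary error from an earlier stage can propagate into the final triples.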
Stage-wise Stylistic Headline Generation: Style Generation and Summarized Content Insertion
null
A quality headline with a high click-rate should not only summarize the content of an article, but also reflect a style that attracts users. Such demand has drawn rising attention to the task of stylistic headline generation (SHG). An intuitive method is to first generate plain headlines leveraging document-headline parallel data and then transfer them to a target style. However, this inevitably suffers from error propagation. Therefore, to unify the two sub-tasks and explicitly decompose style-relevant attributes and summarized content, we propose an end-to-end stage-wise SHG model containing a style generation component and a content insertion component, where the former generates style-relevant intermediate outputs and the latter receives these outputs and then inserts the summarized content. The intermediate outputs are observable, making the style generation easy to control. Our system is comprehensively evaluated by both quantitative and qualitative metrics, and it achieves state-of-the-art results in SHG over three different stylistic datasets.
Jiaao Zhan, Yang Gao, Yu Bai, Qianhui Liu
null
null
2,022
ijcai
Efficient Document-level Event Extraction via Pseudo-Trigger-aware Pruned Complete Graph
null
Most previous studies of document-level event extraction mainly focus on building argument chains in an autoregressive way, which achieves a certain success but is inefficient in both training and inference. In contrast to the previous studies, we propose a fast and lightweight model named PTPCG. In our model, we design a novel strategy for event argument combination together with a non-autoregressive decoding algorithm via pruned complete graphs, which are constructed under the guidance of the automatically selected pseudo triggers. Compared to the previous systems, our system achieves competitive results with 19.8% of the parameters and much lower resource consumption, taking only 3.8% of the GPU hours for training and running up to 8.5 times faster at inference. Moreover, our model shows superior compatibility for datasets with (or without) triggers, and the pseudo triggers can supplement annotated triggers to make further improvements. Codes are available at https://github.com/Spico197/DocEE .
Tong Zhu, Xiaoye Qu, Wenliang Chen, Zhefeng Wang, Baoxing Huai, Nicholas Yuan, Min Zhang
null
null
2,022
ijcai
Charge Prediction by Constitutive Elements Matching of Crimes
null
Charge prediction is to automatically predict the judgemental charges for legal cases. To convict a person/unit of a charge, the case description must contain matching instances of the constitutive elements (CEs) of that charge. This knowledge of CEs is a valuable guide for the judge in making final decisions. However, it is far from fully exploited for charge prediction in the literature. In this paper we propose a novel method named Constitutive Elements-guided Charge Prediction (CECP). CECP mimics the human charge identification process to extract potential instances of CEs and generate predictions accordingly. It avoids laborious labeling of matching instances of CEs through a novel reinforcement learning module which progressively selects potentially matching sentences for CEs and evaluates their relevance. The final prediction is generated based on the selected sentences and their relevant CEs. Experiments on two real-world datasets show the superiority of CECP over competitive baselines.
Jie Zhao, Ziyu Guan, Cai Xu, Wei Zhao, Enze Chen
null
null
2,022
ijcai
Reasoning over Hybrid Chain for Table-and-Text Open Domain Question Answering
null
Tabular and textual question answering requires systems to perform reasoning over heterogeneous information, considering table structure and the connections between table and text. In this paper, we propose a ChAin-centric Reasoning and Pre-training framework (CARP). CARP utilizes a hybrid chain to model the explicit intermediate reasoning process across table and text for question answering. We also propose a novel chain-centric pre-training method to enhance the pre-trained model in identifying the cross-modality reasoning process and alleviating the data sparsity problem. This method constructs a large-scale reasoning corpus by synthesizing pseudo heterogeneous reasoning paths from Wikipedia and generating corresponding questions. We evaluate our system on OTT-QA, a large-scale table-and-text open-domain question answering benchmark, and our system achieves state-of-the-art performance. Further analyses illustrate that the explicit hybrid chain offers substantial performance improvement and interpretability of the intermediate reasoning process, and that the chain-centric pre-training boosts performance on chain extraction.
Wanjun Zhong, Junjie Huang, Qian Liu, Ming Zhou, Jiahai Wang, Jian Yin, Nan Duan
null
null
2,022
ijcai
Targeted Multimodal Sentiment Classification based on Coarse-to-Fine Grained Image-Target Matching
null
Targeted Multimodal Sentiment Classification (TMSC) aims to identify the sentiment polarities over each target mentioned in a pair of sentence and image. Existing methods for TMSC fail to explicitly capture both coarse-grained and fine-grained image-target matching, including 1) the relevance between the image and the target and 2) the alignment between visual objects and the target. To tackle this issue, we propose a new multi-task learning architecture named coarse-to-fine grained Image-Target Matching network (ITM), which jointly performs image-target relevance classification, object-target alignment, and targeted sentiment classification. We further construct an Image-Target Matching dataset by manually annotating the image-target relevance and the visual object aligned with the input target. Experiments on two benchmark TMSC datasets show that our model consistently outperforms the baselines, achieves state-of-the-art results, and presents interpretable visualizations.
Jianfei Yu, Jieming Wang, Rui Xia, Junjie Li
null
null
2,022
ijcai
High-resource Language-specific Training for Multilingual Neural Machine Translation
null
Multilingual neural machine translation (MNMT) trained on multiple language pairs has attracted considerable attention due to fewer model parameters and lower training costs by sharing knowledge among multiple languages. Nonetheless, multilingual training is plagued by degeneration of the shared parameters caused by negative interference among different translation directions, especially for high-resource languages. In this paper, we propose the multilingual translation model with high-resource language-specific training (HLT-MT) to alleviate the negative interference, which adopts two-stage training with a language-specific selection mechanism. Specifically, we first train the multilingual model only with the high-resource pairs and select the language-specific modules at the top of the decoder to enhance the translation quality of high-resource directions. Next, the model is further trained on all available corpora to transfer knowledge from high-resource languages (HRLs) to low-resource languages (LRLs). Experimental results show that HLT-MT outperforms various strong baselines on the WMT-10 and OPUS-100 benchmarks. Furthermore, the analytic experiments validate the effectiveness of our method in mitigating the negative interference in multilingual training.
Jian Yang, Yuwei Yin, Shuming Ma, Dongdong Zhang, Zhoujun Li, Furu Wei
null
null
2,022
ijcai
Position-aware Joint Entity and Relation Extraction with Attention Mechanism
null
Named entity recognition and relation extraction are two important core subtasks of information extraction, which aim to identify named entities and extract relations between them. In recent years, span representation methods have received a lot of attention and are widely used to extract entities and corresponding relations from plain texts. Most recent works focus on how to obtain better span representations from pre-trained encoders, but ignore that the large number of span candidates slows the model down. In our work, we propose a joint entity and relation extraction model with an attention mechanism and position-attentive markers. The attention score of each candidate span is calculated, and most of the candidate spans with low attention scores are pruned before being fed into the span classifier, thus removing the most irrelevant spans. At the same time, in order to explore whether position information can improve the performance of the model, we add position-attentive markers to the model. The experimental results show that our model is effective. With the same pre-trained encoder, our model achieves the new state-of-the-art on standard benchmarks (ACE05, CoNLL04 and SciERC), obtaining a 4.7%-17.8% absolute improvement in relation F1.
Chenglong Zhang, Shuyong Gao, Haofen Wang, Wenqiang Zhang
null
null
2,022
ijcai
CauAIN: Causal Aware Interaction Network for Emotion Recognition in Conversations
null
Emotion Recognition in Conversations has attracted increasing interest in the natural language processing community. Many neural-network based approaches endeavor to solve the challenge of emotional dynamics in conversations and gain appealing results. However, these works are limited in capturing deep emotional clues in conversational context because they ignore the emotion cause, which can be viewed as a stimulus of the target emotion. In this work, we propose Causal Aware Interaction Network (CauAIN) to thoroughly understand the conversational context with the help of emotion cause detection. Specifically, we retrieve causal clues provided by commonsense knowledge to guide the process of causal utterance traceback. Both the retrieval and traceback steps are performed from the perspective of intra- and inter-speaker interaction simultaneously. Experimental results on three benchmark datasets show that our model achieves better performance than most baseline models.
Weixiang Zhao, Yanyan Zhao, Xin Lu
null
null
2,022
ijcai
Clickbait Detection via Contrastive Variational Modelling of Text and Label
null
Clickbait refers to deliberately created sensational or deceptive text for tricking readers into clicking, which severely hurts the web ecosystem. With a growing number of clickbaits on social media, developing automatic detection methods becomes essential. Nonetheless, the performance of existing neural classifiers is limited due to the underutilization of small labelled datasets. Inspired by related pedagogy theories that learning to write can promote comprehension ability, we propose a novel Contrastive Variational Modelling (CVM) framework to exploit the labelled data better. CVM models the conditional distributions of text and clickbait labels by predicting labels from text and generating text from labels simultaneously with Variational AutoEncoder and further differentiates the learned spaces under each label by a mixed contrastive learning loss. In this way, CVM can capture more underlying textual properties and hence utilize label information to its full potential, boosting detection performance. We theoretically demonstrate CVM as learning a joint distribution of text, clickbait label, and latent variable. Experiments on three clickbait detection datasets show our method's robustness to inadequate and biased labels, outperforming several recent strong baselines.
Xiaoyuan Yi, Jiarui Zhang, Wenhao Li, Xiting Wang, Xing Xie
null
null
2,022
ijcai
SyntaSpeech: Syntax-Aware Generative Adversarial Text-to-Speech
null
The recent progress in non-autoregressive text-to-speech (NAR-TTS) has made fast and high-quality speech synthesis possible. However, current NAR-TTS models usually use phoneme sequence as input and thus cannot understand the tree-structured syntactic information of the input sequence, which hurts the prosody modeling. To this end, we propose SyntaSpeech, a syntax-aware and light-weight NAR-TTS model, which integrates tree-structured syntactic information into the prosody modeling modules in PortaSpeech. Specifically, 1) We build a syntactic graph based on the dependency tree of the input sentence, then process the text encoding with a syntactic graph encoder to extract the syntactic information. 2) We incorporate the extracted syntactic encoding with PortaSpeech to improve the prosody prediction. 3) We introduce a multi-length discriminator to replace the flow-based post-net in PortaSpeech, which simplifies the training pipeline and improves the inference speed, while keeping the naturalness of the generated audio. Experiments on three datasets not only show that the tree-structured syntactic information grants SyntaSpeech the ability to synthesize better audio with expressive prosody, but also demonstrate the generalization ability of SyntaSpeech to adapt to multiple languages and multi-speaker text-to-speech. Ablation studies demonstrate the necessity of each component in SyntaSpeech. Source code and audio samples are available at https://syntaspeech.github.io.
Zhenhui Ye, Zhou Zhao, Yi Ren, Fei Wu
null
null
2,022
ijcai
“Think Before You Speak”: Improving Multi-Action Dialog Policy by Planning Single-Action Dialogs
null
Multi-action dialog policy (MADP), which generates multiple atomic dialog actions per turn, has been widely applied in task-oriented dialog systems to provide expressive and efficient system responses. Existing MADP models usually imitate action combinations from the labeled multi-action dialog samples. Due to data limitations, they generalize poorly toward unseen dialog flows. While interactive learning and reinforcement learning algorithms can be applied to incorporate external data sources of real users and user simulators, these take significant manual effort to build and suffer from instability. To address these issues, we propose Planning Enhanced Dialog Policy (PEDP), a novel multi-task learning framework that learns single-action dialog dynamics to enhance multi-action prediction. Our PEDP method employs model-based planning for conceiving what to express before deciding the current response by simulating single-action dialogs. Experimental results on the MultiWOZ dataset demonstrate that our fully supervised learning-based method achieves a solid task success rate of 90.6%, an improvement of 3% over state-of-the-art methods. The source code and the appendix of this paper can be obtained from https://github.com/ShuoZhangXJTU/PEDP.
Shuo Zhang, Junzhou Zhao, Pinghui Wang, Yu Li, Yi Huang, Junlan Feng
null
null
2,022
ijcai
EditSinger: Zero-Shot Text-Based Singing Voice Editing System with Diverse Prosody Modeling
null
Zero-shot text-based singing editing enables singing voice modification based on the given edited lyrics without any additional data from the target singer. However, due to the different demands, challenges occur when applying existing speech editing methods to the singing voice editing task, mainly including the lack of systematic consideration of prosody in insertion and deletion, as well as the trade-off between the naturalness of pronunciation and the preservation of prosody in replacement. In this paper we propose EditSinger, a novel singing voice editing model with specially designed diverse prosody modules to overcome the challenges above. Specifically, 1) a general masked variance adaptor is introduced for the comprehensive prosody modeling of the inserted lyrics and the transition at the deletion boundary; and 2) we further design a fusion pitch predictor for replacement. By disentangling the reference pitch and fusing the predicted pronunciation, the edited pitch can be reconstructed, which ensures natural pronunciation while preserving the prosody of the original audio. In addition, to the best of our knowledge, it is the first zero-shot text-based singing voice editing system. Our experiments conducted on the OpenSinger dataset show that EditSinger can synthesize high-quality edited singing voices with natural prosody according to the corresponding operations.
Lichao Zhang, Zhou Zhao, Yi Ren, Liqun Deng
null
null
2,022
ijcai
Grape: Grammar-Preserving Rule Embedding
null
Word embedding has been widely used in various areas to boost the performance of neural models. However, when processing context-free languages, embedding grammar rules with word embedding loses two types of information. One is the structural relationship between the grammar rules, and the other is the content information of the rule definition. In this paper, we make the first attempt to learn a grammar-preserving rule embedding. We first introduce a novel graph structure to represent the context-free grammar. Then, we apply a Graph Neural Network (GNN) to extract the structural information and use a gating layer to integrate content information. We conducted experiments on six widely-used benchmarks covering four context-free languages. The results show that our approach improves the accuracy of the base model by 0.8 to 6.4 percentage points. Furthermore, Grape achieves a 1.6-point F1 improvement on the method-naming task, which shows the generality of our approach.
Qihao Zhu, Zeyu Sun, Wenjie Zhang, Yingfei Xiong, Lu Zhang
null
null
2,022
ijcai
None Class Ranking Loss for Document-Level Relation Extraction
null
Document-level relation extraction (RE) aims at extracting relations among entities expressed across multiple sentences, which can be viewed as a multi-label classification problem. In a typical document, most entity pairs do not express any pre-defined relation and are labeled as "none" or "no relation". For good document-level RE performance, it is crucial to distinguish such none class instances (entity pairs) from those of pre-defined classes (relations). However, most existing methods only estimate the probability of pre-defined relations independently without considering the probability of "no relation". This ignores the context of entity pairs and the label correlations between the none class and pre-defined classes, leading to sub-optimal predictions. To address this problem, we propose a new multi-label loss that encourages large margins of label confidence scores between each pre-defined class and the none class, which enables capturing label correlations and context-dependent thresholding for label prediction. To gain further robustness against positive-negative imbalance and mislabeled data that could appear in real-world RE datasets, we propose a margin regularization and a margin shifting technique. Experimental results demonstrate that our method significantly outperforms existing multi-label losses for document-level RE and works well in other multi-label tasks such as emotion classification when none class instances are available for training.
Yang Zhou, Wee Sun Lee
null
null
2,022
ijcai
Contrastive Graph Transformer Network for Personality Detection
null
Personality detection is to identify the personality traits underlying social media posts. Most of the existing work is mainly devoted to learning the representations of posts based on labeled data. Yet the ground-truth personality traits are collected through time-consuming questionnaires. Thus, one of the biggest limitations lies in the lack of training data for this data-hungry task. In addition, the correlations among traits should be considered since they are important psychological cues that could help collectively identify the traits. In this paper, we construct a fully-connected post graph for each user and develop a novel Contrastive Graph Transformer Network model (CGTN) which distills potential labels of the graphs based on both labeled and unlabeled data. Specifically, our model first explores a self-supervised Graph Neural Network (GNN) to learn the post embeddings. We design two types of post graph augmentations to incorporate different priors based on psycholinguistic knowledge of Linguistic Inquiry and Word Count (LIWC) and post semantics. Then, upon the post embeddings of the graph, a Transformer-based decoder equipped with post-to-trait attention is exploited to generate traits sequentially. Experiments on two standard datasets demonstrate that our CGTN outperforms the state-of-the-art methods for personality detection.
Yangfu Zhu, Linmei Hu, Xinkai Ge, Wanrong Peng, Bin Wu
null
null
2,022
ijcai
Adaptive Information Belief Space Planning
null
Reasoning about uncertainty is vital in many real-life autonomous systems. However, current state-of-the-art planning algorithms either cannot reason about uncertainty explicitly, or do so with high computational burden. Here, we focus on making informed decisions efficiently, using reward functions that explicitly deal with uncertainty. We formulate an approximation, namely an abstract observation model, that uses an aggregation scheme to alleviate computational costs. We derive bounds on the expected information-theoretic reward function and, as a consequence, on the value function. We then propose a method to refine aggregation to achieve identical action selection in a fraction of the computational time.
Moran Barenboim, Vadim Indelman
null
null
2,022
ijcai
Scheduling with Untrusted Predictions
null
Using machine-learned predictions to create algorithms with better approximation guarantees is a young and active field. In this work, we study classic scheduling problems in the learning-augmented setting. More specifically, we consider the problem of scheduling jobs with arbitrary release dates on a single machine and the problem of scheduling jobs with a common release date on multiple machines. Our objective is to minimize the sum of completion times. For both problems, we propose algorithms which use predictions for taking their decisions. Our algorithms are consistent -- i.e., when the predictions are accurate, their performance is close to that of an optimal offline algorithm -- and robust -- i.e., when the predictions are wrong, their performance is close to that of an online algorithm without predictions. In addition, we confirm the above theoretical bounds by conducting an experimental evaluation comparing the proposed algorithms to the offline optimal ones for both the single and multiple machine settings.
Evripidis Bampis, Konstantinos Dogeas, Alexander Kononov, Giorgio Lucarelli, Fanny Pascual
null
null
2,022
ijcai
Explaining the Behaviour of Hybrid Systems with PDDL+ Planning
null
The aim of this work is to explain the observed behaviour of a hybrid system (HS). The explanation problem is cast as finding a trajectory of the HS that matches some observations. By using the formalism of hybrid automata (HA), we characterize the explanations as the language of a network of HA that comprises one automaton for the HS and another one for the observations, thus restricting the behaviour of the HS exclusively to trajectories that explain the observations. We observe that this problem corresponds to a reachability problem in model-checking, but that state-of-the-art model checkers struggle to find concrete trajectories. To overcome this issue we provide a formal mapping from HA to PDDL+ and show how to use an off-the-shelf automated planner. An experimental analysis over domains with piece-wise constant, linear and nonlinear dynamics reveals that the proposed PDDL+ approach is much more efficient than solving directly the explanation problem with model-checking solvers.
Diego Aineto, Eva Onaindia, Miquel Ramirez, Enrico Scala, Ivan Serina
null
null
2,022
ijcai
Offline Time-Independent Multi-Agent Path Planning
null
This paper studies a novel planning problem for multiple agents that cannot share holding resources, named OTIMAPP (Offline Time-Independent Multi-Agent Path Planning). Given a graph and a set of start-goal pairs, the problem consists in assigning a path to each agent such that every agent eventually reaches their goal without blocking each other, regardless of how the agents are being scheduled at runtime. The motivation stems from the nature of distributed environments, in which agents act fully asynchronously and have no knowledge of the exact timing of other actors' actions. We present solution conditions, computational complexity, solvers, and robotic applications.
Keisuke Okumura, François Bonnet, Yasumasa Tamura, Xavier Défago
null
null
2,022
ijcai
Shared Autonomy Systems with Stochastic Operator Models
null
We consider shared autonomy systems where multiple operators (AI and human), can interact with the environment, e.g. by controlling a robot. The decision problem for the shared autonomy system is to select which operator takes control at each timestep, such that a reward specifying the intended system behaviour is maximised. The performance of the human operator is influenced by unobserved factors, such as fatigue or skill level. Therefore, the system must reason over stochastic models of operator performance. We present a framework for stochastic operators in shared autonomy systems (SO-SAS), where we represent operators using rich, partially observable models. We formalise SO-SAS as a mixed-observability Markov decision process, where environment states are fully observable and internal operator states are hidden. We test SO-SAS on a simulated domain and a computer game, empirically showing it results in better performance compared to traditional formulations of shared autonomy systems.
Clarissa Costen, Marc Rigter, Bruno Lacerda, Nick Hawes
null
null
2,022
ijcai
Planning with Qualitative Action-Trajectory Constraints in PDDL
null
In automated planning the ability of expressing constraints on the structure of the desired plans is important to deal with solution quality, as well as to express control knowledge. In PDDL3, this is supported through state-trajectory constraints corresponding to a class of LTLf formulae. In this paper, we first introduce a formalism to express trajectory constraints over actions in the plan, rather than over traversed states; then we investigate compilation-based methods to deal with such constraints in propositional planning, and propose a new simple effective method. Finally, we experimentally study the usefulness of our action-trajectory constraints as a tool to express control knowledge. The experimental results show that the performance of a classical planner can be significantly improved by exploiting knowledge expressed by action constraints and handled by our compilation method, while the same knowledge turns out to be less beneficial when specified as state constraints and handled by two state-of-the-art systems supporting state constraints.
Luigi Bonassi, Alfonso Emilio Gerevini, Enrico Scala
null
null
2,022
ijcai
Explaining Soft-Goal Conflicts through Constraint Relaxations
null
Recent work suggests to explain trade-offs between soft-goals in terms of their conflicts, i.e., minimal unsolvable soft-goal subsets. But this does not explain the conflicts themselves: Why can a given set of soft-goals not be jointly achieved? Here we approach that question in terms of the underlying constraints on plans in the task at hand, namely resource availability and time windows. In this context, a natural form of explanation for a soft-goal conflict is a minimal constraint relaxation under which the conflict disappears ("if the deadline were 1 hour later, it would work"). We explore algorithms for computing such explanations. A baseline is to simply loop over all relaxed tasks and compute the conflicts for each separately. We improve over this with two algorithms that leverage information -- conflicts, reachable states -- across relaxed tasks. We show that these algorithms can exponentially outperform the baseline in theory, and we run experiments confirming that advantage in practice.
Rebecca Eifler, Jeremy Frank, Jörg Hoffmann
null
null
2,022
ijcai
Online Planning in POMDPs with Self-Improving Simulators
null
How can we plan efficiently in a large and complex environment when the time budget is limited? Given the original simulator of the environment, which may be computationally very demanding, we propose to learn online an approximate but much faster simulator that improves over time. To plan reliably and efficiently while the approximate simulator is learning, we develop a method that adaptively decides which simulator to use for every simulation, based on a statistic that measures the accuracy of the approximate simulator. This allows us to use the approximate simulator to replace the original simulator for faster simulations when it is accurate enough under the current context, thus trading off simulation speed and accuracy. Experimental results in two large domains show that when integrated with POMCP, our approach allows to plan with improving efficiency over time.
Jinke He, Miguel Suau, Hendrik Baier, Michael Kaisers, Frans A. Oliehoek
null
null
2,022
ijcai
Tight Bounds for Hybrid Planning
null
Several hierarchical planning systems feature a rich level of language features making them capable of expressing real-world problems. One such feature, used by several current planning systems, is causal links, which track search progress. The formalism combining Hierarchical Task Network (HTN) planning with these links, known from Partial-Order Causal Link (POCL) planning, is often referred to as hybrid planning. In this paper we study the computational complexity of such hybrid planning problems. More specifically, we provide missing membership results to existing hardness proofs and thereby provide tight complexity bounds for all known subclasses of hierarchical planning problems. We also revisit and correct a result from the literature for plan verification, showing that it remains NP-complete even in the absence of a task hierarchy.
Pascal Bercher, Songtuan Lin, Ron Alford
null
null
2,022
ijcai
An Efficient Approach to Data Transfer Scheduling for Long Range Space Exploration
null
Long range space missions, such as Rosetta, require robust plans of data-acquisition activities and of the resulting data transfers. In this paper we revisit the problem of assigning priorities to data transfers in order to maximize safety margin of onboard memory. We propose a fast sweep algorithm to verify the feasibility of a given priority assignment and we introduce an efficient exact algorithm to assign priorities on a single downlink window. We prove that the problem is NP-hard for several windows, and we propose several randomized heuristics to tackle the general case. Our experimental results show that the proposed approaches are able to improve the plans computed for the real mission by the previously existing method, while the sweep algorithm yields drastic accelerations.
Emmanuel Hebrard, Christian Artigues, Pierre Lopez, Arnaud Lusson, Steve Chien, Adrien Maillard, Gregg Rabideau
null
null
2,022
ijcai
On the Computational Complexity of Model Reconciliations
null
Model-reconciliation explanation is a popular framework for generating explanations for planning problems. While the framework has been extended to multiple settings since its introduction for classical planning problems, there is little agreement on the computational complexity of generating minimal model reconciliation explanations in the basic setting. In this paper, we address this lacuna by introducing a decision version of the model-reconciliation explanation generation problem, and we show that it is Sigma-2-P-complete.
Sarath Sreedharan, Pascal Bercher, Subbarao Kambhampati
null
null
2,022
ijcai
Landmark Heuristics for Lifted Classical Planning
null
While state-of-the-art planning systems need a grounded (propositional) task representation, the input model is provided "lifted", specifying predicates and action schemas with variables over a finite object universe. The size of the grounded model is exponential in predicate/action-schema arity, limiting applicability to cases where it is small enough. Recent work has taken up this challenge, devising an effective lifted forward search planner as basis for lifted heuristic search, as well as a variety of lifted heuristic functions based on the delete relaxation. Here we add a novel family of lifted heuristic functions, based on landmarks. We design two methods for landmark extraction in the lifted setting. The resulting heuristics exhibit performance advantages over previous heuristics in several benchmark domains. In particular, the combination with lifted delete-relaxation heuristics in a LAMA-style planner yields good results, beating the previous state of the art in lifted planning.
Julia Wichlacz, Daniel Höller, Jörg Hoffmann
null
null
2,022
ijcai
General Optimization Framework for Recurrent Reachability Objectives
null
We consider the mobile robot path planning problem for a class of recurrent reachability objectives. These objectives are parameterized by the expected time needed to visit one position from another, the expected square of this time, and also the frequency of moves between two neighboring locations. We design an efficient strategy synthesis algorithm for recurrent reachability objectives and demonstrate its functionality on non-trivial instances.
David Klaska, Antonin Kucera, Vit Musil, Vojtech Rehak
null
null
2,022
ijcai
PG3: Policy-Guided Planning for Generalized Policy Generation
null
A longstanding objective in classical planning is to synthesize policies that generalize across multiple problems from the same domain. In this work, we study generalized policy search-based methods with a focus on the score function used to guide the search over policies. We demonstrate limitations of two score functions --- policy evaluation and plan comparison --- and propose a new approach that overcomes these limitations. The main idea behind our approach, Policy-Guided Planning for Generalized Policy Generation (PG3), is that a candidate policy should be used to guide planning on training problems as a mechanism for evaluating that candidate. Theoretical results in a simplified setting give conditions under which PG3 is optimal or admissible. We then study a specific instantiation of policy search where planning problems are PDDL-based and policies are lifted decision lists. Empirical results in six domains confirm that PG3 learns generalized policies more efficiently and effectively than several baselines.
Ryan Yang, Tom Silver, Aidan Curtis, Tomas Lozano-Perez, Leslie Kaelbling
null
null
2,022
ijcai
Dynamic Car Dispatching and Pricing: Revenue and Fairness for Ridesharing Platforms
null
A major challenge for ridesharing platforms is to guarantee profit and fairness simultaneously, especially in the presence of misaligned incentives of drivers and riders. We focus on the dispatching-pricing problem to maximize the total revenue while keeping both drivers and riders satisfied. We study the computational complexity of the problem, provide a novel two-phased pricing solution with revenue and fairness guarantees, extend it to stochastic settings and develop a dynamic (a.k.a., learning-while-doing) algorithm that actively collects data to learn the demand distribution during the scheduling process. We also conduct extensive experiments to demonstrate the effectiveness of our algorithms.
Zishuo Zhao, Xi Chen, Xuefeng Zhang, Yuan Zhou
null
null
2,022
ijcai
Robust Subset Selection by Greedy and Evolutionary Pareto Optimization
null
Subset selection, which aims to select a subset from a ground set to maximize some objective function, arises in various applications such as influence maximization and sensor placement. In real-world scenarios, however, one often needs to find a subset which is robust against (i.e., is good over) a number of possible objective functions due to uncertainty, resulting in the problem of robust subset selection. This paper considers robust subset selection with monotone objective functions, relaxing the submodular property required by previous studies. We first show that the greedy algorithm can obtain an approximation ratio with respect to the correlation and submodularity ratios of the objective functions; and then propose EPORSS, an evolutionary Pareto optimization algorithm that can utilize more time to find better subsets. We prove that EPORSS can also be theoretically grounded, achieving a similar approximation guarantee to the greedy algorithm. In addition, we derive the lower bound of the correlation ratio for the application of robust influence maximization, and further conduct experiments to validate the performance of the greedy algorithm and EPORSS.
Chao Bian, Yawen Zhou, Chao Qian
null
null
2,022
ijcai
Generalisation of Alpha-Beta Search for AND-OR Graphs With Partially Ordered Values
null
We define a new setting related to the evaluation of AND-OR directed acyclic graphs with partially ordered values. Such graphs arise naturally when solving games with incomplete information (e.g. most card games such as Bridge) or games with multiple criteria. In particular, this setting generalises standard AND-OR graph evaluation and computation of optimal strategies in games with complete information. Under this setting, we propose a new algorithm which uses both alpha-beta pruning and cached values. In this paper, we present our algorithm, prove its correctness, and give experimental results on a card game with incomplete information.
Junkang Li, Bruno Zanuttini, Tristan Cazenave, Véronique Ventos
null
null
2,022
ijcai
An Online Learning Approach towards Far-sighted Emergency Relief Planning under Intentional Attacks in Conflict Areas
null
A large number of emergency humanitarian rescue demands in conflict areas around the world are accompanied by intentional, persistent and unpredictable attacks on rescuers and supplies. Unfortunately, existing work on humanitarian relief planning mostly ignores this challenge, resulting in relief distribution plans that are largely parlous and short-sighted. To address this, we first propose an offline multi-stage optimization problem of emergency relief planning under intentional attacks, in which all parameters in the game between the rescuer and the attacker are assumed to be known or predictable. Then, an online version of this problem is introduced to meet the need for online and irrevocable decision making when those parameters are revealed in an online fashion. To achieve far-sighted emergency relief planning under attacks, we design an online learning approach which is proven to obtain a near-optimal solution of the offline problem when the online revealed parameters are sampled i.i.d. from an unknown distribution. Finally, extensive experiments on a real anti-Ebola relief planning case, based on data from the Ebola outbreak and armed attacks in the DR Congo, show the scalability and effectiveness of our approach.
Haoyu Yang, Kaiming Xiao, Lihua Liu, Hongbin Huang, Weiming Zhang
null
null
2,022
ijcai
Competitive Analysis for Multi-Commodity Ski-Rental Problem
null
We investigate an extended version of the classical ski-rental problem with multiple commodities. A customer uses a set of commodities altogether, and he/she needs to choose payment options to cover the usage of each commodity without knowledge of the future. The payment options for each commodity include (1) renting: paying for on-demand usage, and (2) buying: paying for lifetime usage. It is a novel extension of the classical ski-rental problem, which deals with only one commodity. To address this problem, we propose a new online algorithm called the Multi-Object Break-Even (MOBE) algorithm and conduct competitive analysis. We show that the tight lower and upper bounds of the MOBE algorithm's competitive ratio are e/(e-1) and 2, respectively, against an adaptive adversary under arbitrary renting and buying prices. We further prove that the MOBE algorithm is an optimal online algorithm if commodities have the same rent-to-buy ratio. Numerical results verify our theoretical conclusions and demonstrate the advantages of MOBE in a real-world scenario.
Binghan Wu, Wei Bao, Dong Yuan, Bing Zhou
null
null
2,022
ijcai
A Native Qualitative Numeric Planning Solver Based on AND/OR Graph Search
null
Qualitative numeric planning (QNP) is classical planning extended with non-negative real variables that can be increased or decreased by some arbitrary amount. Existing approaches for solving QNP problems are exclusively based on compilation to fully observable nondeterministic planning (FOND) problems or FOND+ problems, i.e., FOND problems with explicit fairness assumptions. However, the FOND-compilation approaches suffer from some limitations, such as difficulties in generating all strong cyclic solutions for FOND problems or introducing a great many extra variables and actions. In this paper, we propose a simpler characterization of QNP solutions and a new approach to solve QNP problems based on directly searching for a solution, which is a closed and terminating subgraph that contains a goal node, in the AND/OR graphs induced by QNP problems. Moreover, we introduce a pruning strategy based on termination tests on subgraphs. We implemented a native solver DSET based on the proposed approach and compared its performance with that of the two compilation-based approaches. Experimental results show that DSET is faster than the FOND-compilation approach by one order of magnitude, and comparable with the FOND+-compilation approach.
Hemeng Zeng, Yikun Liang, Yongmei Liu
null
null
2,022
ijcai
Multi-robot Task Allocation in the Environment with Functional Tasks
null
The multi-robot task allocation (MRTA) problem has long been a key issue in multi-robot systems. Previous studies usually assumed that the robots must complete all tasks with minimum time cost. However, in many real situations, some tasks can be selectively performed by robots and will not limit the achievement of the goal. Instead, completing these tasks causes some functional effects, such as decreasing the time cost of completing other tasks. This kind of task can be called a “functional task”. This paper studies multi-robot task allocation in an environment with functional tasks. In this problem, neither allocating all functional tasks nor allocating no functional task is always optimal. Previous algorithms usually allocate all tasks and cannot suitably select the functional tasks. Because of the interaction and sequential influence, the total effects of the functional tasks are too complex to calculate exactly. We fully analyze this problem and then design a heuristic algorithm. The heuristic algorithm scores the functional tasks based on the linear threshold model (used to analyze the sequential influence of a functional task). Simulated experiments demonstrate that the heuristic algorithm outperforms the benchmark algorithms.
Fuhan Yan, Kai Di
null
null
2,022
ijcai
Learning and Exploiting Progress States in Greedy Best-First Search
null
Previous work introduced the concept of progress states. After expanding a progress state, a greedy best-first search (GBFS) will only expand states with lower heuristic values. Current methods can identify progress states only for a single task and only after a solution for the task has been found. We introduce a novel approach that learns a description logic formula characterizing all progress states in a classical planning domain. Using the learned formulas in a GBFS to break ties in favor of progress states often significantly reduces the search effort.
Patrick Ferber, Liat Cohen, Jendrik Seipp, Thomas Keller
null
null
2,022
ijcai
A Closed-Loop Perception, Decision-Making and Reasoning Mechanism for Human-Like Navigation
null
Reliable navigation systems have a wide range of applications in robotics and autonomous driving. Current approaches employ an open-loop process that converts sensor inputs directly into actions. However, such open-loop schemes struggle to handle complex and dynamic real-world scenarios due to their poor generalization. Imitating human navigation, we add a reasoning process that converts actions back into internal latent states, forming a two-stage closed loop of perception, decision-making, and reasoning. First, VAE-Enhanced Demonstration Learning endows the model with an understanding of basic navigation rules. Then, two dual processes in RL-Enhanced Interaction Learning generate reward feedback for each other and collectively enhance obstacle avoidance capability. The reasoning model can substantially promote generalization and robustness, and facilitates deploying the algorithm on real-world robots without elaborate transfers. Experiments show our method is more adaptable to novel scenarios than state-of-the-art approaches.
Wenqi Zhang, Kai Zhao, Peng Li, Xiao Zhu, Yongliang Shen, Yanna Ma, Yingfeng Chen, Weiming Lu
null
null
2,022
ijcai
Efficient Budgeted Graph Search
null
Iterative Budgeted Exponential Search (IBEX) is a general search algorithm that can limit the number of re-expansions performed in common problems like iterative-deepening tree search and search with inconsistent heuristics. IBEX has been adapted into a specific tree algorithm, Budgeted Tree Search (BTS), which behaves like IDA* when the problem instance is well-behaved but keeps the worst-case guarantees when problems are not well-behaved. The analogous algorithm on graphs, Budgeted Graph Search (BGS), does not have these same properties. This paper reformulates BGS into Efficient Budgeted Graph Search (BGSe), showing how to implement the algorithm so that it behaves identically to A* when problems are well-behaved, and retains the best-case performance otherwise. Experimental results validate the performance of BGSe on a range of theoretical and practical problem instances.
Jasmeet Kaur, Nathan R. Sturtevant
null
null
2,022
ijcai
Completeness and Diversity in Depth-First Proof-Number Search with Applications to Retrosynthesis
null
We revisit Depth-First Proof-Number Search (DFPN), a well-known algorithm for solving two-player games. First, we consider the completeness property of the algorithm and its variants, i.e., whether they always find a winning strategy when one exists. While it is known that the standard version is not complete, we show that the combination with the simple Threshold Controlling Algorithm is complete, solving an open problem from the area. Second, we modify DFPN to compute a diverse set of solutions rather than just a single one. Finally, we apply this new variant in chemistry to the synthesis planning of new target molecules (retrosynthesis). In this domain, a diverse set of many solutions is desirable. We apply additional modifications from the literature to the algorithm and show that it outperforms Monte-Carlo Tree Search, another well-known algorithm for the same problem, according to a natural diversity measure.
Christopher Franz, Georg Mogk, Thomas Mrziglod, Kevin Schewior
null
null
2,022
ijcai
Budgeted Sequence Submodular Maximization
null
The problem of selecting a sequence of items that maximizes a given submodular function appears in many real-world applications. Existing studies on the problem only consider uniform costs over items, but non-uniform item costs are more general. Taking this cue, we study the problem of budgeted sequence submodular maximization (BSSM), which introduces non-uniform item costs into the sequence selection. This problem arises in a number of applications such as movie recommendation, course sequence design, and so on. Non-uniform costs on items significantly increase the solution complexity, and we prove that BSSM is NP-hard. To solve the problem, we first propose a greedy algorithm GBM with an error bound. We also design an anytime algorithm POBM based on Pareto optimization to improve the quality of solutions. Moreover, we prove that POBM obtains approximate solutions in expected polynomial running time, and converges faster than POSEQSEL, a state-of-the-art algorithm for sequence submodular maximization with cardinality constraints. We further introduce optimizations to speed up POBM. Experimental results on both synthetic and real-world datasets demonstrate the performance of our new algorithms.
Xuefeng Chen, Liang Feng, Xin Cao, Yifeng Zeng, Yaqing Hou
null
null
2,022
ijcai
Large Neighborhood Search with Decision Diagrams
null
Local search is a popular technique to solve combinatorial optimization problems efficiently. To escape local minima, one generally uses metaheuristics or tries to design large neighborhoods around the current best solution. A somewhat more black-box approach consists in using an optimization solver to explore a large neighborhood. This is the large-neighborhood search (LNS) idea that we reuse in this work. We introduce a generic neighborhood exploration algorithm based on restricted decision diagrams (DDs) constructed from the current best solution. We evaluate DD-LNS on two sequencing problems: the traveling salesman problem with time windows (TSPTW) and a production planning problem (DLSP). Despite its simplicity, DD-LNS is competitive with the state-of-the-art MIP approach on DLSP. It is able to improve the best known solutions of some standard TSPTW instances and even to prove the optimality of quite a few others.
Xavier Gillard, Pierre Schaus
null
null
2,022
ijcai
Reinforcement Learning for Cross-Domain Hyper-Heuristics
null
In this paper, we propose a new hyper-heuristic approach that uses reinforcement learning to automatically learn the selection of low-level heuristics across a wide range of problem domains. We provide a detailed analysis and evaluation of the algorithm components, including different ways to represent the hyper-heuristic state space and reset strategies to avoid unpromising areas of the solution space. Our methods have been evaluated using HyFlex, a well-known benchmarking framework for cross-domain hyper-heuristics, and compared with state-of-the-art approaches. The experimental evaluation shows that our reinforcement-learning based approach produces results that are competitive with the state-of-the-art, including the top participants of the Cross Domain Hyper-heuristic Search Competition 2011.
Florian Mischek, Nysret Musliu
null
null
2,022
ijcai
Runtime Analysis of Single- and Multi-Objective Evolutionary Algorithms for Chance Constrained Optimization Problems with Normally Distributed Random Variables
null
Chance constrained optimization problems allow modeling problems where constraints involving stochastic components should only be violated with a small probability. Evolutionary algorithms have been applied to this scenario and shown to achieve high-quality results. With this paper, we contribute to the theoretical understanding of evolutionary algorithms for chance constrained optimization. We study the scenario of stochastic components that are independent and Normally distributed. Considering the simple single-objective (1+1) EA, we show that imposing an additional uniform constraint already leads to local optima for very restricted scenarios and an exponential optimization time. We therefore introduce a multi-objective formulation of the problem which trades off the expected cost and its variance. We show that multi-objective evolutionary algorithms are highly effective when using this formulation and obtain a set of solutions that contains an optimal solution for any possible confidence level imposed on the constraint. Furthermore, we prove that this approach can also be used to compute a set of optimal solutions for the chance constrained minimum spanning tree problem. Experimental investigations on instances of the NP-hard stochastic minimum weight dominating set problem confirm the benefit of the multi-objective approach in practice.
Frank Neumann, Carsten Witt
null
null
2,022
ijcai
Efficient Neural Neighborhood Search for Pickup and Delivery Problems
null
We present an efficient Neural Neighborhood Search (N2S) approach for pickup and delivery problems (PDPs). Specifically, we design a powerful Synthesis Attention that allows the vanilla self-attention to synthesize various types of features regarding a route solution. We also exploit two customized decoders that automatically learn to perform removal and reinsertion of a pickup-delivery node pair to tackle the precedence constraint. Additionally, a diversity enhancement scheme is leveraged to further ameliorate the performance. Our N2S is generic, and extensive experiments on two canonical PDP variants show that it produces state-of-the-art results among existing neural methods. Moreover, it even outstrips the well-known LKH3 solver on the more constrained PDP variant. Our implementation of N2S is available online.
Yining Ma, Jingwen Li, Zhiguang Cao, Wen Song, Hongliang Guo, Yuejiao Gong, Yeow Meng Chee
null
null
2,022
ijcai
Efficient Algorithms for Monotone Non-Submodular Maximization with Partition Matroid Constraint
null
In this work, we study the problem of monotone non-submodular maximization with a partition matroid constraint. Although a generalization of this problem has been studied in the literature, our work focuses on leveraging properties of the partition matroid constraint to (1) propose algorithms with theoretical bounds and efficient query complexity; and (2) provide better analysis of the theoretical performance guarantees of some existing techniques. We further investigate those algorithms' performance in two applications: Boosting Influence Spread and Video Summarization. Experiments show our algorithms return results comparable to the state-of-the-art algorithms while taking far fewer queries.
Lan N. Nguyen, My T. Thai
null
null
2,022
ijcai
Neural Network Pruning by Cooperative Coevolution
null
Neural network pruning is a popular model compression method which can significantly reduce the computing cost with negligible loss of accuracy. Recently, filters are often pruned directly by designing proper criteria or using auxiliary modules to measure their importance, which, however, requires expertise and trial-and-error. Due to the advantage of automation, pruning by evolutionary algorithms (EAs) has attracted much attention, but the performance is limited for deep neural networks as the search space can be quite large. In this paper, we propose a new filter pruning algorithm CCEP by cooperative coevolution, which prunes the filters in each layer by EAs separately. That is, CCEP reduces the pruning space by a divide-and-conquer strategy. The experiments show that CCEP can achieve a competitive performance with the state-of-the-art pruning methods, e.g., prune ResNet56 for 63.42% FLOPs on CIFAR10 with -0.24% accuracy drop, and ResNet50 for 44.56% FLOPs on ImageNet with 0.07% accuracy drop.
Haopu Shang, Jia-Liang Wu, Wenjing Hong, Chao Qian
null
null
2,022
ijcai
Real-Time Heuristic Search with LTLf Goals
null
In Real-Time Heuristic Search (RTHS) we are given a search graph G and a heuristic, and the objective is to find a path from a given start node to a given goal node in G. As such, one does not impose any trajectory constraints on the path besides reaching the goal. In this paper we consider a version of RTHS in which temporally extended goals can be defined over the form of the path. Such goals are specified in Linear Temporal Logic over Finite Traces (LTLf), an expressive language that has been considered in many other frameworks, such as Automated Planning, Synthesis, and Reinforcement Learning, but has not yet been studied in the context of RTHS. We propose a general automata-theoretic approach for RTHS, whereby LTLf goals are supported as the result of searching over the cross product of the search graph and the automaton for the LTLf goal; specifically, we describe LTL-LRTA*, a version of LSS-LRTA*. Second, we propose an approach to produce heuristics for LTLf goals, based on existing goal-dependent heuristics. Finally, we propose a greedy strategy for RTHS with LTLf goals, which focuses the search on making progress over the structure of the automaton; this yields LTL-LRTA*+A. In our experimental evaluation over standard benchmarks we show that LTL-LRTA*+A may outperform LTL-LRTA* substantially for a variety of LTLf goals.
Jaime Middleton, Rodrigo Toro Icarte, Jorge Baier
null
null
2,022
ijcai
HEA-D: A Hybrid Evolutionary Algorithm for Diversified Top-k Weight Clique Search Problem
null
The diversified top-k weight clique (DTKWC) search problem is an important generalization of the diversified top-k clique (DTKC) search problem with extensive applications, which extends the DTKC search problem by taking into account the weight of vertices. In this paper, we formulate the DTKWC search problem using mixed integer linear programming constraints and propose an efficient hybrid evolutionary algorithm (HEA-D) that combines a clique-based crossover operator and an effective simulated annealing-based local optimization procedure to find high-quality local optima. The experimental results show that HEA-D performs much better than the existing methods on two representative real-world benchmarks.
Jun Wu, Chu-Min Li, Yupeng Zhou, Minghao Yin, Xin Xu, Dangdang Niu
null
null
2,022
ijcai
Summary Markov Models for Event Sequences
null
Datasets involving sequences of different types of events without meaningful time stamps are prevalent in many applications, for instance when extracted from textual corpora. We propose a family of models for such event sequences -- summary Markov models -- where the probability of observing an event type depends only on a summary of historical occurrences of its influencing set of event types. This Markov model family is motivated by Granger causal models for time series, with the important distinction that only one event can occur in a position in an event sequence. We show that a unique minimal influencing set exists for any set of event types of interest and choice of summary function, formulate two novel models from the general family that represent specific sequence dynamics, and propose a greedy search algorithm for learning them from event sequence data. We conduct an experimental investigation comparing the proposed models with relevant baselines, and illustrate their knowledge acquisition and discovery capabilities through case studies involving sequences from text.
Debarun Bhattacharjya, Saurabh Sihag, Oktie Hassanzadeh, Liza Bialik
null
null
2,022
ijcai
Robustness Guarantees for Credal Bayesian Networks via Constraint Relaxation over Probabilistic Circuits
null
In many domains, worst-case guarantees on the performance (e.g. prediction accuracy) of a decision function subject to distributional shifts and uncertainty about the environment are crucial. In this work we develop a method to quantify the robustness of decision functions with respect to credal Bayesian networks, formal parametric models of the environment where uncertainty is expressed through credal sets on the parameters. In particular, we address the maximum marginal probability (MARmax) problem, that is, determining the greatest probability of an event (such as misclassification) obtainable for parameters in the credal set. We develop a method to faithfully transfer the problem into a constrained optimization problem on a probabilistic circuit. By performing a simple constraint relaxation, we show how to obtain a guaranteed upper bound on MARmax in linear time in the size of the circuit. We further theoretically characterize this constraint relaxation in terms of the original Bayesian network structure, which yields insight into the tightness of the bound. We implement the method and provide experimental evidence that the upper bound is often near tight and demonstrates improved scalability compared to other methods.
Hjalmar Wijk, Benjie Wang, Marta Kwiatkowska
null
null
2,022
ijcai
Ancestral Instrument Method for Causal Inference without Complete Knowledge
null
Unobserved confounding is the main obstacle to causal effect estimation from observational data. Instrumental variables (IVs) are widely used for causal effect estimation when there exist latent confounders. With the standard IV method, when a given IV is valid, unbiased estimation can be obtained, but the validity requirement on a standard IV is strict and untestable. Conditional IVs have been proposed to relax the requirement of standard IVs by conditioning on a set of observed variables (known as a conditioning set for a conditional IV). However, the criterion for finding a conditioning set for a conditional IV needs a directed acyclic graph (DAG) representing the causal relationships of both observed and unobserved variables. This makes it challenging to discover a conditioning set directly from data. In this paper, by leveraging maximal ancestral graphs (MAGs) for causal inference with latent variables, we study the graphical properties of ancestral IVs, a type of conditional IVs using MAGs, and develop the theory to support data-driven discovery of the conditioning set for a given ancestral IV in data under the pretreatment variable assumption. Based on the theory, we develop an algorithm for unbiased causal effect estimation with a given ancestral IV and observational data. Extensive experiments on synthetic and real-world datasets demonstrate the performance of the algorithm in comparison with existing IV methods.
Debo Cheng, Jiuyong Li, Lin Liu, Jiji Zhang, Thuc Duy Le, Jixue Liu
null
null
2,022
ijcai
Hidden 1-Counter Markov Models and How to Learn Them
null
We introduce hidden 1-counter Markov models (H1MMs) as an attractive sweet spot between standard hidden Markov models (HMMs) and probabilistic context-free grammars (PCFGs). Both HMMs and PCFGs have a variety of applications, e.g., speech recognition, anomaly detection, and bioinformatics. PCFGs are more expressive than HMMs, e.g., they are better suited for studying protein folding or natural language processing. However, they suffer from slow parameter fitting, which is cubic in the observation sequence length. The same process for HMMs is just linear using the well-known forward-backward algorithm. We argue that adding an integer counter to each state of an HMM, e.g., representing the number of clients waiting in a queue, brings its expressivity closer to that of PCFGs. At the same time, we show that parameter fitting for such a model is computationally inexpensive: it is bilinear in the length of the observation sequence and the maximal counter value, which grows more slowly than the observation length. The resulting model of H1MMs allows us to combine the best of both worlds: more expressivity with faster parameter fitting.
Mehmet Kurucan, Mete Özbaltan, Sven Schewe, Dominik Wojtczak
null
null
2,022
ijcai