doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
1706.07269 | 233 | [167] J. Susskind, K. Maurer, V. Thakkar, D. L. Hamilton, J. W. Sherman, Perceiving individuals and groups: expectancies, dispositional inferences, and causal attributions, Journal of Personality and Social Psychology 76 (2) (1999) 181.
[168] W. R. Swartout, J. D. Moore, Explanation in second generation expert systems, in: Second
Generation Expert Systems, Springer, 543–585, 1993.
[169] P. E. Tetlock, R. Boettger, Accountability: a social magnifier of the dilution effect, Journal of Personality and Social Psychology 57 (3) (1989) 388.
[170] P. E. Tetlock, J. S. Lerner, R. Boettger, The dilution effect: judgemental bias, conversational convention, or a bit of both?, European Journal of Social Psychology 26 (1996) 915–934.
[171] P. Thagard, Explanatory coherence, Behavioral and Brain Sciences 12 (03) (1989) 435–467.
[172] T. Trabasso, J. Bartolone, Story understanding and counterfactual reasoning, Journal of Experimental Psychology: Learning, Memory, and Cognition 29 (5) (2003) 904. | 1706.07269#233 | Explanation in Artificial Intelligence: Insights from the Social Sciences | There has been a recent resurgence in the area of explainable artificial
intelligence as researchers and practitioners seek to make their algorithms
more understandable. Much of this research is focused on explicitly explaining
decisions or actions to a human observer, and it should not be controversial to
say that looking at how humans explain to each other can serve as a useful
starting point for explanation in artificial intelligence. However, it is fair
to say that most work in explainable artificial intelligence uses only the
researchers' intuition of what constitutes a `good' explanation. There exist
vast and valuable bodies of research in philosophy, psychology, and cognitive
science of how people define, generate, select, evaluate, and present
explanations, which argues that people employ certain cognitive biases and
social expectations towards the explanation process. This paper argues that the
field of explainable artificial intelligence should build on this existing
research, and reviews relevant papers from philosophy, cognitive
psychology/science, and social psychology, which study these topics. It draws
out some important findings, and discusses ways that these can be infused with
work on explainable artificial intelligence. | http://arxiv.org/pdf/1706.07269 | Tim Miller | cs.AI | null | null | cs.AI | 20170622 | 20180815 | [
{
"id": "1606.03490"
}
] |
1706.07269 | 234 | [173] A. Tversky, D. Kahneman, Extensional versus intuitive reasoning: The conjunction fallacy in probability judgment, Psychological Review 90 (4) (1983) 293.
[174] K. Uttich, T. Lombrozo, Norms inform mental state ascriptions: A rational explanation for the side-effect effect, Cognition 116 (1) (2010) 87–100.
[175] J. Van Bouwel, E. Weber, Remote causes, bad explanations?, Journal for the Theory of Social Behaviour 32 (4) (2002) 437–449.
[176] B. C. Van Fraassen, The pragmatics of explanation, American Philosophical Quarterly 14 (2) (1977) 143–150.
[177] N. Vasilyeva, D. A. Wilkenfeld, T. Lombrozo, Goals Affect the Perceived Quality of Explanations, in: D. C. Noelle, R. Dale, A. S. Warlaumont, J. Yoshimi, T. Matlock, C. D. Jennings, P. P. Maglio (Eds.), Proceedings of the 37th Annual Conference of the Cognitive Science Society, Cognitive Science Society, 2469–2474, 2015. | 1706.07269#234 | Explanation in Artificial Intelligence: Insights from the Social Sciences |
1706.07269 | 235 | [178] F. B. von der Osten, M. Kirley, T. Miller, The minds of many: opponent modelling in a stochastic game, in: Proceedings of the 25th International Joint Conference on Artificial Intelligence (IJCAI), AAAI Press, 3845–3851, 2017.
[179] G. H. Von Wright, Explanation and understanding, Cornell University Press, 1971.
[180] D. Walton, A new dialectical theory of explanation, Philosophical Explorations 7 (1) (2004) 71–89.
[181] D. Walton, Examination dialogue: An argumentation framework for critically questioning an
expert opinion, Journal of Pragmatics 38 (5) (2006) 745–777.
[182] D. Walton, Dialogical Models of Explanation, in: Proceedings of the International Explanation-Aware Computing (ExaCt) workshop, 1–9, 2007.
[183] D. Walton, A dialogue system specification for explanation, Synthese 182 (3) (2011) 349–374.
[184] D. N. Walton, Logical Dialogue-Games and Fallacies, University Press of America, Lanham,
Maryland, 1984. | 1706.07269#235 | Explanation in Artificial Intelligence: Insights from the Social Sciences |
1706.07269 | 236 | [185] J. Weiner, BLAH, a system which explains its reasoning, Artificial Intelligence 15 (1-2) (1980) 19–48.
[186] D. S. Weld, G. Bansal, Intelligible Artificial Intelligence, arXiv e-prints 1803.04263, URL https://arxiv.org/pdf/1803.04263.pdf.
[187] A. Wendt, On constitution and causation in international relations, Review of International Studies 24 (05) (1998) 101–118.
[188] D. A. Wilkenfeld, T. Lombrozo, Inference to the best explanation (IBE) versus explaining for the best inference (EBI), Science & Education 24 (9-10) (2015) 1059–1077.
[189] J. J. Williams, T. Lombrozo, B. Rehder, The hazards of explanation: Overgeneralization in the face of exceptions, Journal of Experimental Psychology: General 142 (4) (2013) 1006.
[190] M. Winikoff, Debugging Agent Programs with Why? Questions, in: Proceedings of the 16th Conference on Autonomous Agents and MultiAgent Systems, AAMAS '17, IFAAMAS, 251–259, 2017. | 1706.07269#236 | Explanation in Artificial Intelligence: Insights from the Social Sciences |
1706.06708 | 0 | arXiv:1706.06708v2 [cs.CC] 27 Apr 2018
Solving the Rubik's Cube Optimally is NP-complete
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
# Abstract
In this paper, we prove that optimally solving an n × n × n Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an n × n × n Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square, an n × n × 1 generalization of the Rubik's Cube, and then proceed with a similar but more complicated proof for the Rubik's Cube case. Our results hold both when the goal is to make the sides monochromatic and when the goal is to put each sticker into a specific location.
| 1706.06708#0 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 0 | # Learnable pooling with Context Gating for video classification
# Antoine Miech, Ivan Laptev and Josef Sivic https://github.com/antoine77340/LOUPE
Abstract: Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
Index Terms: Machine learning, Computer vision, Neural networks, Video analysis.
| 1706.06905#0 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 0 | arXiv:1706.06927v1 [cs.RO] 21 Jun 2017
# Combined Task and Motion Planning as Classical AI Planning
Jonathan Ferrer-Mestres Universitat Pompeu Fabra Barcelona, Spain [email protected]
Guillem Francès Universitat Pompeu Fabra Barcelona, Spain [email protected]
Hector Geffner ICREA & Universitat Pompeu Fabra Barcelona, Spain [email protected]
November, 2016 | 1706.06927#0 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06708 | 1 | # 1 Introduction
The Rubik's Cube is an iconic puzzle in which the goal is to rearrange the stickers on the outside of a 3 × 3 × 3 cube so as to make each face monochromatic by rotating 1 × 3 × 3 (or 3 × 1 × 3 or 3 × 3 × 1) slices. In some versions where the faces show pictures instead of colors, the goal is to put each sticker into a specific location. The 3 × 3 × 3 Rubik's Cube can be generalized to an n × n × n cube in which a single move is a rotation of a 1 × n × n slice. We can also consider the generalization to an n × n × 1 figure. In this simpler puzzle, called the n × n Rubik's Square, the allowed moves are flips of n × 1 × 1 rows or 1 × n × 1 columns. These two generalizations were introduced in [3]. | 1706.06708#1 | Solving the Rubik's Cube Optimally is NP-complete |
1706.06905 | 1 |
# 1 INTRODUCTION
Groundtruth: Barbecue - Grilling - Machine - Food - Wood - Cooking. Top 6 scores: Food (97.5%) - Wood (74.9%) - Barbecue (60.0%) - Cooking (50.1%) - Barbecue grill (27.9%) - Table (27.4%). Groundtruth: Tree - Christmas Tree - Christmas Decoration - Christmas. Top 6 scores: Christmas (87.7%) - Christmas decoration (40.1%) - Origami (23.0%) - Paper (15.2%) - Tree (13.9%) - Christmas Tree (7.4%)
Understanding and recognizing video content is a major challenge for numerous applications including surveillance, personal assistance, smart homes, autonomous driving, stock footage search and sports video analysis. In this work, we address the problem of multi-label video classification for user-generated videos on the Internet. The analysis of such data involves several challenges. Internet videos have a great variability in terms of content and quality (see Figure 1). Moreover, user-generated labels are typically incomplete, ambiguous and may contain errors. | 1706.06905#1 | Learnable pooling with Context Gating for video classification |
1706.06927 | 1 |
Abstract: Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
| 1706.06927#1 | Combined Task and Motion Planning as Classical AI Planning |
1706.06978 | 1 | ABSTRACT Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector will be a bottleneck, which brings difficulty for Embedding&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of the model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation | 1706.06978#1 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of the model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
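The local activation unit described in the abstract above pools behavior embeddings with candidate-dependent weights. The sketch below illustrates that idea; the array shapes, the `toy_att` scorer, and all names are illustrative assumptions rather than DIN's exact architecture.

```python
import numpy as np

def din_user_representation(behavior_embs, ad_emb, activation_unit):
    """Candidate-dependent pooling in the style of DIN's local activation.

    behavior_embs: (T, d) embeddings of the user's historical behaviors.
    ad_emb: (d,) embedding of the candidate ad.
    activation_unit: callable scoring a (behavior, ad) pair -> scalar.
    The result is a weighted sum, so it varies with the candidate ad.
    """
    weights = np.array([activation_unit(e, ad_emb) for e in behavior_embs])
    # The paper reports skipping softmax normalization of the weights,
    # so that the intensity of each interest is preserved; we follow that.
    return weights @ behavior_embs

# Toy activation unit standing in for DIN's small scoring MLP (assumption):
def toy_att(behavior, ad):
    return float(np.dot(behavior, ad))

user_vec = din_user_representation(np.random.rand(5, 8), np.random.rand(8), toy_att)
```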
1706.06708 | 2 | The overall purpose of this paper is to address the computational difficulty of optimally solving these puzzles. In particular, consider the decision problem which asks for a given puzzle configuration whether that puzzle can be solved in a given number of moves. We show that this problem is NP-complete for the n × n Rubik's Square and for the n × n × n Rubik's Cube under two different move models. These results close a problem that has been repeatedly posed as far back as 1984 [1, 7, 4] and has until now remained open [6].
In Section 2, we formally introduce the decision problems regarding Rubik's Squares and Rubik's Cubes whose complexity we will analyze. Then in Section 3, we introduce the variant of the Hamiltonicity problem that we will reduce from, Promise Cubical Hamiltonian Path, and prove this problem to be NP-hard. Next, we prove that the problems regarding the Rubik's Square are NP-complete in Section 4 by reducing from Promise Cubical Hamiltonian Path. After that, we apply the same ideas in Section 5 to a more complicated proof of NP-hardness for the problems regarding the Rubik's Cube. Finally, we discuss possible next steps in Section 6. | 1706.06708#2 | Solving the Rubik's Cube Optimally is NP-complete |
1706.06905 | 2 | Current approaches for video analysis typically represent videos by features extracted from consecutive frames, followed by feature aggregation over time. Example methods for feature extraction include deep convolutional neural networks (CNNs) pre-trained on static images [1], [2], [3], [4]. Representations of motion and appearance can be obtained from CNNs pre-trained for video frames and short video clips [5], [6], as well as hand-crafted video features [7], [8], [9]. Other more advanced models employ hierarchical spatio-temporal convolutional architectures [5], [10], [11], [12], [13], [14] to both extract and temporally aggregate video features at the same time.
Common methods for temporal feature aggregation include simple averaging or maximum pooling as well as more sophisticated pooling techniques such as VLAD [15] or more recently recurrent models (LSTM [16] and GRU [17]). These techniques, however, may be suboptimal. Indeed, simple techniques such as average or maximum pooling may become inaccurate for long sequences. Recurrent models are frequently used for temporal aggregation of variable-length sequences [18], [19] and often outperform simpler aggregation methods, however, their training remains cumbersome. As we show in Section 5, training recurrent | 1706.06905#2 | Learnable pooling with Context Gating for video classification |
1706.06927 | 2 | # I. INTRODUCTION
Planning problems in robotics involve robots that move around, while manipulating objects and avoiding collisions. These problems are thought to be outside the scope of standard AI planners, and are normally addressed through a combination of two types of planners: task planners that handle the high-level, symbolic reasoning part, and motion planners that handle motion and geometrical constraints [1, 11, 2, 33, 23, 14]. These two components, however, are not independent, and hence, by giving one of the two planners a secondary role in the search for plans, approaches based on task and motion decomposition tend to be ineffective and result in lots of backtracks [16].
In recent years, there have been proposals aimed at addressing this combinatorial problem by exploiting the efficiency of modern classical AI planners. In one case, the spatial constraints are taken into account as part of a goal-directed replanning process where optimistic assumptions about free space are incrementally refined until plans are obtained that can be executed in the real environment [31]. In another approach [7], geometrical information is used to update the
heuristic used in the FF planner [13]. Other recent approaches appeal instead to SMT solvers suitable for addressing both task planning and the geometrical constraints [26, 3]. | 1706.06927#2 | Combined Task and Motion Planning as Classical AI Planning |
1706.06978 | 2 | function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic. | 1706.06978#2 | Deep Interest Network for Click-Through Rate Prediction |
1706.06708 | 3 | *MIT Computer Science and Artificial Intelligence Laboratory, 32 Vassar St., Cambridge, MA 02139, USA, [email protected]
†MIT Computer Science and Artificial Intelligence Laboratory, 32 Vassar St., Cambridge, MA 02139, USA, [email protected]. Now at Google Inc.
# 2 Rubik's Cube and Rubik's Square problems
# 2.1 Rubik's Square
We begin with a simpler model based on the Rubik's Cube which we will refer to as the Rubik's Square. In this model, a puzzle consists of an n × n array of unit cubes, called cubies to avoid ambiguity. Every cubie face on the outside of the puzzle has a colored (red, blue, green, white, yellow, or orange) sticker. The goal of the puzzle is to use a sequence of moves to rearrange the cubies such that each face of the puzzle is monochromatic in a different color. A move consists of flipping a single row or column in the array through space via a rotation in the long direction as demonstrated in Figure 1. | 1706.06708#3 | Solving the Rubik's Cube Optimally is NP-complete |
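To make the Rubik's Square move model above concrete, here is a minimal executable sketch. It assumes the 180-degree flip exchanges each cubie's top and bottom stickers in place, and it tracks only those two faces; this is a simplification for illustration, not the paper's full formalization.

```python
import numpy as np

def flip_row(top, bottom, i):
    """Flip row i of an n x n Rubik's Square about its long axis.

    Each cubie in the row keeps its grid position while its top and
    bottom stickers trade places; side stickers are not modeled here.
    """
    top[i, :], bottom[i, :] = bottom[i, :].copy(), top[i, :].copy()

def flip_col(top, bottom, j):
    """Flip column j; symmetric to flip_row."""
    top[:, j], bottom[:, j] = bottom[:, j].copy(), top[:, j].copy()

# Example: 3 x 3 square, red top and orange bottom; flip the first row.
top = np.full((3, 3), "R")
bottom = np.full((3, 3), "O")
flip_row(top, bottom, 0)
assert list(top[0]) == ["O", "O", "O"]
```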
1706.06905 | 3 | Fig. 1: Two example videos from the Youtube-8M V2 dataset together with the ground truth and top predicted labels. Predictions colored as green are labels from the groundtruth annotation.
models requires a relatively large amount of data. Moreover, recurrent models can be sub-optimal for processing of long video sequences during GPU training. It is also not clear if current models for sequential aggregation are well-adapted for video representation. Indeed, our experiments with training recurrent models using temporally-ordered and randomly-ordered video frames show similar results.
A. Miech, I. Laptev and J. Sivic are with Inria, WILLOW, Département d'Informatique de l'École Normale Supérieure, PSL Research University, ENS/INRIA/CNRS UMR 8548, Paris, France. E-mail: {antoine.miech, ivan.laptev, josef.sivic}@inria.fr. J. Sivic is also with the Czech Institute of Informatics, Robotics and Cybernetics, Czech Technical University in Prague.
Another research direction is to exploit traditional orderless aggregation techniques based on clustering approaches such as Bag-of-visual-words [20], [21], Vector of Locally aggregated | 1706.06905#3 | Learnable pooling with Context Gating for video classification |
1706.06927 | 3 | The work in this paper also aims at exploiting the efficiency of modern classical AI planning algorithms but departs from prior work in two ways. First, task and motion problems are fully compiled into classical planning problems so that the classical plans are valid robot plans. Motion planners and collision checkers [17] are used in the compilation but not in the solution of the classical problem. The compilation is thus sound, and probabilistically complete in the sense that robot plans map into classical plans provided that the number of sampled robot configurations is sufficient. In order to make the compiled problems compact, we move away from the standard PDDL planning language and appeal instead to Functional STRIPS [8], a planning language that is expressive enough to accommodate procedures and state constraints. State constraints are formulas that are forced to be true in every reachable state, and thus represent implicit action preconditions. In the CTMP planning encoding, state constraints are used to rule out spatial overlaps. Procedures are used in turn for testing and updating robot and object configurations, and their planning-time execution is made efficient by precompiling suitable tables. The size and computation of these tables is also efficient, and allows us to deal with 3D scenarios involving tens of objects and a PR2 robot simulated in Gazebo [15]. | 1706.06927#3 | Combined Task and Motion Planning as Classical AI Planning |
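The combination of precompiled tables and state constraints described above can be sketched as follows; the table layout and all names here are hypothetical, chosen only to show that the planning-time check reduces to cheap lookups once a collision checker has filled the table offline.

```python
from itertools import combinations

# Filled offline with a motion planner / collision checker:
# overlaps[frozenset({c1, c2})] is True iff an object placed at sampled
# configuration c1 would overlap one placed at configuration c2.
overlaps = {}

def state_constraint_holds(confs):
    """No two object configurations may overlap (implicit precondition).

    confs maps object name -> id of its currently sampled configuration.
    In the paper's encoding this constraint is declared once and must
    hold in every reachable state, so it prunes actions whose effects
    would produce a spatial overlap.
    """
    return not any(overlaps.get(frozenset({a, b}), False)
                   for a, b in combinations(confs.values(), 2))
```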
1706.06978 | 3 | # CCS CONCEPTS • Information systems → Display advertising; Recommender systems;
# KEYWORDS Click-Through Rate Prediction, Display Advertising, E-commerce
1 INTRODUCTION In the cost-per-click (CPC) advertising system, advertisements are ranked by the eCPM (effective cost per mille), which is the product of the bid price and CTR (click-through rate), and the CTR needs to be predicted by the system. Hence, the performance of the CTR prediction model has a direct impact on the final revenue and plays a key role in the advertising system. Modeling CTR prediction has received much attention from both the research and industry communities.
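As a quick worked example of the eCPM ranking rule, with made-up numbers:

```python
# eCPM = bid price x predicted CTR (x1000 to express it per mille impressions).
ads = {"ad_a": (2.00, 0.015),   # ($ bid per click, predicted CTR)
       "ad_b": (1.20, 0.030)}
ecpm = {name: bid * ctr * 1000 for name, (bid, ctr) in ads.items()}
print(max(ecpm, key=ecpm.get))  # ad_b wins: 36.0 beats ad_a's 30.0
```

A better CTR estimate can thus outrank a higher bid, which is why the accuracy of the CTR model matters directly for revenue.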
These methods follow a similar Embedding&MLP paradigm: large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to be fed into fully connected layers (also known as multilayer perceptron, MLP) to learn the nonlinear relations among features. Compared with the commonly used logistic regression model [19], these deep learning methods can reduce a lot of feature engineering jobs and enhance the model capability greatly. For simplicity, we name these methods Embedding&MLP in this paper, which now have become popular on the CTR prediction task. | 1706.06978#3 | Deep Interest Network for Click-Through Rate Prediction |
1706.06708 | 4 |
Figure 1: A single move in an example 6 × 6 Rubik's Square.
We are concerned with the following decision problem:
Problem 1. The Rubik's Square problem has as input an n × n Rubik's Square configuration C and a value k. The goal is to decide whether a Rubik's Square in configuration C can be solved in k moves or fewer.
Note that this type of puzzle was previously introduced in [3] as the n × n × 1 Rubik's Cube. In that paper, the authors showed that deciding whether it is possible to solve the n × n × 1 Rubik's Cube in a given number of moves is NP-complete when the puzzle is allowed to have missing stickers (and the puzzle is considered solved if each face contains stickers of only one color).
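For intuition about Problem 1, a naive breadth-first decision procedure under the simplified sticker model sketched earlier is given below. It runs in exponential time (consistent with the NP-completeness result, no efficient algorithm is expected), and `is_solved` is a caller-supplied predicate; all names are illustrative.

```python
from collections import deque

def solvable_within(top, bottom, k, is_solved):
    """Decide Problem 1 by brute force: solvable in at most k moves?"""
    n = len(top)

    def flip(state, axis, idx):
        t, b = [list(map(list, grid)) for grid in state]
        for m in range(n):
            i, j = (idx, m) if axis == 0 else (m, idx)
            t[i][j], b[i][j] = b[i][j], t[i][j]   # swap top/bottom stickers
        return tuple(map(tuple, t)), tuple(map(tuple, b))

    start = (tuple(map(tuple, top)), tuple(map(tuple, bottom)))
    seen, frontier = {start}, deque([(start, 0)])
    while frontier:
        state, depth = frontier.popleft()
        if is_solved(state):
            return True
        if depth < k:
            for axis in (0, 1):            # 0: row flip, 1: column flip
                for idx in range(n):
                    nxt = flip(state, axis, idx)
                    if nxt not in seen:
                        seen.add(nxt)
                        frontier.append((nxt, depth + 1))
    return False
```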
# 2.2 Rubik's Cube | 1706.06708#4 | Solving the Rubik's Cube Optimally is NP-complete |
1706.06905 | 4 |
Descriptors (VLAD) [15] or Fisher Vectors [22]. It has been recently shown that integrating VLAD as a differentiable module in a neural network can significantly improve the aggregated representation for the task of place retrieval [23]. This has motivated us to integrate and enhance such clustering-based aggregation techniques for the task of video representation and classification.
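A sketch of the differentiable VLAD-style aggregation referenced here, in the spirit of [23]; the intra- and L2-normalization steps are omitted and the parameter names are assumptions.

```python
import numpy as np

def netvlad(X, centers, W, b):
    """Soft-assignment VLAD pooling over n frame descriptors.

    X: (n, d) descriptors; centers: (k, d) learnable cluster centers;
    W: (d, k) and b: (k,) parameterize the soft assignments.
    Returns a flattened (k*d,) video-level descriptor.
    """
    logits = X @ W + b
    logits -= logits.max(axis=1, keepdims=True)      # stable softmax
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)                # (n, k) assignments
    residuals = X[:, None, :] - centers[None, :, :]  # (n, k, d)
    return np.einsum('nk,nkd->kd', a, residuals).ravel()
```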
Contributions. In this work we make the following contributions: (i) we introduce a new state-of-the-art architecture aggregating video and audio features for video classification, (ii) we introduce the Context Gating layer, an efficient non-linear unit for modeling interdependencies among network activations, and (iii) we experimentally demonstrate benefits of clustering-based aggregation techniques over LSTM and GRU approaches for the task of video classification. | 1706.06905#4 | Learnable pooling with Context Gating for video classification |
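Contribution (ii) above names the Context Gating layer, a learned non-linear gate on activations. A minimal sketch of such an elementwise gating unit follows; the exact shapes and its placement in the network are assumptions here.

```python
import numpy as np

def context_gating(x, W, b):
    """Elementwise gating: y = sigmoid(W @ x + b) * x.

    The sigmoid term is a per-dimension gate, trained end-to-end,
    that re-weights each activation using the whole input vector
    as context. Shapes: x (d,), W (d, d), b (d,).
    """
    gate = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return gate * x
```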
1706.06927 | 4 | The second departure from prior work is in the classical planning algorithm itself. Previous approaches have built upon classical planners such as FF and LAMA [13, 28], yet such planners cannot be used with expressive planning languages that feature functions and state constraints. The Functional STRIPS planner FS [5] handles functions and can derive and use heuristics, yet these heuristics are expensive to compute and not always cost-effective to deal with state constraints. For these reasons, we build instead on a different class of planning algorithm, called best-first width search (BFWS), that has been recently shown to produce state-of-the-art results over classical planning benchmarks [20]. An advantage of BFWS is that it relies primarily on exploratory novelty-based measures, extended with simple goal-directed heuristics. For this work, we adapt BFWS to work with Functional STRIPS with state constraints, replacing a Functional STRIPS heuristic that is
expensive and does not take state constraints into account by a fast and simple heuristic suited to pick and place tasks. | 1706.06927#4 | Combined Task and Motion Planning as Classical AI Planning |
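The exploratory novelty measures that BFWS relies on can be illustrated with the simplest, width-1, case; this is a generic sketch of the idea, not the planner's implementation.

```python
def is_novel(state_atoms, seen_atoms):
    """Width-1 novelty test used to rank states in best-first width search.

    A state is novel if it makes true at least one atom (variable/value
    pair) not seen in any previously generated state; novel states are
    expanded first, which drives exploration with only weak heuristics.
    """
    new_atoms = state_atoms - seen_atoms
    seen_atoms |= new_atoms
    return bool(new_atoms)

# Example: the second state adds no unseen atom, so it is not novel.
seen = set()
print(is_novel({("at", "A"), ("holding", "o1")}, seen))  # True
print(is_novel({("at", "A")}, seen))                     # False
```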
1706.06978 | 4 | However, the user representation vector with a limited dimension in Embedding&MLP methods will be a bottleneck to express the user's diverse interests. Take display advertising in an e-commerce site as an example. Users might be interested in different kinds of goods simultaneously when visiting the e-commerce site. That is to say, user interests are diverse. When it comes to the CTR prediction task, user interests are usually captured from user behavior data. Embedding&MLP methods learn the representation of all interests for a certain user by transforming the embedding vectors of user behaviors into a fixed-length vector, which is in a Euclidean space where all users' representation vectors are. In other words, diverse interests of the user are compressed into a fixed-length vector, which limits the expressive ability of Embedding&MLP methods. To make the representation capable enough for expressing the user's diverse interests, the dimension of the fixed-length vector needs to be largely expanded. Unfortunately, it will dramatically enlarge the size of the learning parameters and aggravate the risk of overfitting under limited data. Besides, it adds to the burden of computation and storage, which may not be tolerated for an industrial online system. On | 1706.06978#4 | Deep Interest Network for Click-Through Rate Prediction |
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
users' diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
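The bottleneck criticized in the 1706.06978 entry above is easy to see in code: with sum pooling, a user with two behaviors and a user with two hundred end up with vectors of the same fixed size, computed without any reference to the candidate ad. The embedding table and sizes below are illustrative, not the paper's.

```python
import numpy as np

rng = np.random.default_rng(0)
d, vocab = 8, 1000                         # illustrative sizes
embedding_table = rng.normal(size=(vocab, d))

def fixed_length_user_vector(behavior_ids):
    """Sum pooling: any number of behavior embeddings is compressed
    into one d-dimensional vector, independent of the candidate ad."""
    return embedding_table[np.asarray(behavior_ids)].sum(axis=0)

u_few  = fixed_length_user_vector([3, 17])        # 2 behaviors
u_many = fixed_length_user_vector(range(200))     # 200 behaviors
assert u_few.shape == u_many.shape == (d,)        # same bottleneck size
```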
1706.06708 | 5 | Next consider the Rubik's Cube puzzle. An n × n × n Rubik's Cube is a cube consisting of n³ unit cubes called cubies. Every face of a cubie that is on the exterior of the cube has a colored (red, blue, green, white, yellow, or orange) sticker. The goal of the puzzle is to use a sequence of moves to reconfigure the cubies in such a way that each face of the cube ends up monochromatic in a different color. A move count metric is a convention for counting moves in a Rubik's Cube. Several common move count metrics for Rubik's Cubes are listed in [8]. As discussed in [2], however, many common move count metrics do not easily generalize to n > 3 or are not of any theoretical interest. In this paper, we will restrict our attention to two move count metrics called the Slice Turn Metric and the Slice Quarter Turn Metric. Both of these metrics use the same type of motion to define a move. Consider the subdivision of the Rubik's Cube's volume into n slices of dimension 1 × n × n (or n × 1 × n or n × n | 1706.06708#5 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 5 | Results. We evaluate our method on the large-scale multi-modal Youtube-8M V2 dataset containing about 8M videos and 4716 unique tags. We use pre-extracted visual and audio features provided with the dataset [19] and demonstrate improvements obtained with the Context Gating as well as by the combination of learnable poolings. Our method obtains top performance, out of more than 650 teams, in the Youtube-8M Large-Scale Video Understanding challenge¹. Compared to common recurrent models, our models are faster to train and require less training data. Figure 1 illustrates some qualitative results of our method.
2 RELATED WORK
This work is related to previous methods for video feature extraction, aggregation, and gating, reviewed below.
# 2.1 Feature extraction | 1706.06905#5 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 5 | expensive and does not take state constraints into account with a fast and simple heuristic suited to pick-and-place tasks.
Given that classical AI planning is planning over finite and discrete state spaces with a known initial state, deterministic actions, and a goal state to be reached [9], it is not surprising that combined task and motion planning can be fully compiled into a classical planning problem once the continuous configuration space is suitably discretized or sampled [17]. Moreover, modern classical planners scale up very well and, like SAT or SMT solvers, are largely unaffected by the size of the state space. If this approach has not been taken before, it is thus not due to the lack of efficiency of such planners but due to the limitations of the languages that they support [24]. Indeed, there is no way to compile non-overlap physical constraints into PDDL in compact form. We address this limitation by using a target language for the compilation that makes use of state constraints to rule out physical overlaps during motions, and procedures for testing and updating physical configurations. This additional expressive power prevents the use of standard heuristic search planning algorithms [13, 28] but is compatible with a more recent class of width-based planning methods that are competitive with state-of-the-art heuristic search approaches [21, 20]. | 1706.06927#5 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 5 | the risk of overfitting under limited data. Besides, it adds computation and storage burdens that may not be tolerable for an industrial online system. On the other hand, it is not necessary to compress all the diverse interests of a certain user into the same vector when predicting a candidate ad, because only part of the user's interests will influence his/her action (to click or not to click). For example, a female swimmer will click recommended goggles mostly because of the bathing suit she bought rather than the shoes in her last week's shopping list. Motivated by this, we propose a novel model: Deep Interest Network (DIN), which adaptively calculates the representation vector of user interests by taking into consideration the relevance of historical behaviors given a candidate ad. By introducing a local activation unit, DIN pays attention to the related user interests by soft-searching for relevant parts of historical behaviors and applies weighted sum pooling to obtain the representation of user interests with respect to the candidate ad. Behaviors with higher relevance to the candidate ad get higher activated weights and dominate the representation of user interests. We visualize this phenomenon in the experiment section. In | 1706.06978#5 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
users' diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
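A sketch of the local activation unit described in the 1706.06978 entry above: each behavior embedding is weighted by its relevance to the candidate ad before pooling, so the resulting user vector changes from ad to ad. The dot-product relevance and the softmax normalization below are simplifications of the paper's learned activation unit (DIN itself scores relevance with a small feed-forward network and leaves the weights unnormalized).

```python
import numpy as np

def din_user_vector(behavior_embs, ad_emb):
    """behavior_embs: (T, d) embeddings of past behaviors; ad_emb: (d,)
    candidate ad embedding. Returns an ad-dependent interest vector."""
    scores = behavior_embs @ ad_emb                  # relevance per behavior
    w = np.exp(scores - scores.max())
    w /= w.sum()                                     # softmax (illustrative only)
    return (w[:, None] * behavior_embs).sum(axis=0)  # weighted sum pooling
```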
1706.06708 | 6 | Consider the subdivision of the Rubik's Cube's volume into n slices of dimension 1 × n × n (or n × 1 × n or n × n × 1). In the Slice Turn Metric (STM), a move is a rotation of a single slice by any multiple of 90°. Similarly, in the Slice Quarter Turn Metric (SQTM), a move is a rotation of a single slice by an angle of 90° in either direction. An example SQTM move is shown in Figure 2. | 1706.06708#6 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
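An illustrative sketch of the SQTM move just defined: a slice is a 2-D cross-section of the cubie array, and one move is a 90° rotation of that cross-section. The model below tracks only which cubie occupies each cell and ignores sticker orientations, so it is a positional toy rather than a full cube simulator.

```python
import numpy as np

n = 7
cube = np.arange(n ** 3).reshape(n, n, n)   # cubie ids indexed by (x, y, z)

def sqtm_move(cube, axis, index, direction=1):
    """One SQTM move: rotate the slice of cubies at position `index`
    along `axis` by 90 degrees (direction = +1 or -1)."""
    sl = [slice(None)] * 3
    sl[axis] = index
    cube[tuple(sl)] = np.rot90(cube[tuple(sl)], k=direction)
    return cube

cube = sqtm_move(cube, axis=0, index=3)     # turn the middle x-slice once
```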
1706.06905 | 6 | 2 RELATED WORK
This work is related to previous methods for video feature extraction, aggregation, and gating, reviewed below.
# 2.1 Feature extraction
Successful hand-crafted representations [7], [8], [9] are based on local histograms of image and motion gradient orientations extracted along dense trajectories [9], [24]. More recent methods extract deep convolutional neural network activations computed from individual frames or blocks of frames using spatial [6], [25], [26], [27] or spatio-temporal [5], [10], [11], [12], [13], [14] convolutions. Convolutional neural networks can also be applied separately on the appearance channel and the pre-computed motion field channel resulting in the, so called, two-stream representations [6], [11], [14], [26], [28]. As our work is motivated by the Youtube-8M large-scale video understanding challenge [19], we will assume for the rest of the paper that features are provided (more details are given in Section 5). This work mainly focuses on the temporal aggregation of given features.
# 2.2 Feature aggregation | 1706.06905#6 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 6 | The paper is organized as follows. We first describe the planning language and how the combined task and motion planning problem is modeled as a classical problem. We then present the preprocessing involved, the planning algorithm, and the empirical results. Videos displaying some of the problems and plans can be seen at bit.ly/2fnXeAd.
# II. PLANNING LANGUAGE
For making general use of functions and procedures in the planning encoding, we use Functional STRIPS, a logical extension of the STRIPS planning language [8]. Functional STRIPS is a general modeling language for classical planning that is based on the variable-free fragment of first-order logic, where actions a have preconditions Pre(a) and effects f(t) := t′, and where the preconditions Pre(a) and goals G are variable-free, first-order formulas, and f(t) and t′ are terms with f being a fluent symbol. Functional STRIPS assumes that fluent symbols, namely, those symbols whose denotation may change as a result of the actions, are all function symbols. Constant, functional and relational (predicate) symbols whose denotation does not change are called fixed symbols, and their denotation must be given either extensionally by enumeration, or intensionally by means of procedures as in [4] [25]. | 1706.06927#6 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06905 | 7 | # 2.2 Feature aggregation
Video features are typically extracted from individual frames or short video clips. The remaining question is: how to aggregate video features over the entire and potentially long video? One way to achieve this is to employ recurrent neural networks, such as long short-term memory (LSTM) [16] or gated recurrent units (GRU) [17], on top of the extracted frame-level features to capture the temporal structure of video into a single representation [18], [29], [30], [31], [32]. Hierarchical spatio-temporal convolution architectures [5], [10], [11], [12], [13], [14] can also be viewed
1. https://www.kaggle.com/c/youtube8m
Fig. 2: Overview of our network architecture for video classification (the "Late Concat" variant). FC denotes a Fully-Connected layer. MoE denotes the Mixture-of-Experts classifier [19]. | 1706.06905#7 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 7 | Terms, atoms, and formulas are defined from constant, function, and relational symbols in the standard first-order-logic way, except that in order for the representation of states to be finite and compact, the symbols, and hence the terms, are typed. A type is given by a finite set of fixed constant symbols. The terms f(c) where f is a fluent symbol and c is a tuple of fixed constant symbols are called state variables, as the state is just the assignment of values to such 'variables'. As an example, the action of moving a block b onto another block b′ can be expressed by an action move(b, b′) with | 1706.06927#7 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 7 | Recently, inspired by the success of deep learning in computer vision [14] and natural language processing [1], deep learning based methods have been proposed for the CTR prediction task [3, 4, 21, 26].
user interests varies over different ads, which improves the expressive ability of the model under a limited dimension and enables DIN to better capture users' diverse interests.
Training industrial deep networks with large-scale sparse features is a great challenge. For example, SGD based optimization methods only update those parameters of sparse features appearing in each mini-batch. However, with traditional ℓ2 regularization added, the computation becomes unacceptable, as it requires calculating the L2-norm over all parameters (whose number scales up to billions in our situation) for each mini-batch. In this paper, we develop a novel mini-batch aware regularization where only parameters of non-zero features appearing in each mini-batch participate in the calculation of the L2-norm, making the computation acceptable. Besides, we design a data adaptive activation function, which generalizes the commonly used PReLU [12] by adaptively adjusting the rectification point w.r.t. the distribution of inputs and is shown to be helpful for training industrial networks with sparse features. The contributions of this paper are summarized as follows: | 1706.06978#7 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
users' diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
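A minimal sketch of the mini-batch aware idea from the 1706.06978 entry above: the L2 penalty, and hence its gradient, touches only the embedding rows whose feature ids occur in the current mini-batch. The paper additionally scales each row's term by its feature frequency; that weighting is omitted here for brevity.

```python
import numpy as np

def mini_batch_aware_l2(embedding_table, batch_feature_ids, lam=1e-4):
    """L2 penalty restricted to the embedding rows whose feature ids
    occur in the current mini-batch. embedding_table: (vocab, d)."""
    rows = np.unique(np.asarray(batch_feature_ids))
    penalty = lam * np.sum(embedding_table[rows] ** 2)
    grad = np.zeros_like(embedding_table)   # gradient is equally sparse
    grad[rows] = 2.0 * lam * embedding_table[rows]
    return penalty, grad
```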
1706.06708 | 8 | Figure 2: A single slice rotation in an example 7 × 7 × 7 Rubik's Cube.
Problem 2. The STM/SQTM Rubik's Cube problem takes as input a configuration C of a Rubik's Cube together with a number k. The goal is to decide whether a Rubik's Cube in configuration C can be solved in at most k STM/SQTM moves.
# 2.3 Notation
Next we define some notation for dealing with the Rubik's Cube and Rubik's Square problems. | 1706.06708#8 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 8 | as a way to both extract and aggregate temporal features at the same time. Other methods capture only the distribution of features in the video, not explicitly modeling their temporal ordering. The simplest form of this approach is the average or maximum pooling of video features [33] over time. Other commonly used methods include bag-of-visual-words [20], [21], Vector of Locally Aggregated Descriptors (VLAD) [15] or Fisher Vector [22] encoding. Applications of these techniques to video include [7], [8], [9], [34], [35]. Typically, these methods [31], [36] rely on unsupervised learning of the codebook. However, the codebook can also be learned in a discriminative manner [34], [37], [38], or the entire encoding module can be included within the convolutional neural network architecture and trained in an end-to-end manner [23]. This type of end-to-end trainable orderless aggregation has recently been applied to video frames in [26]. Here we extend this work by aggregating visual and audio inputs, and also investigate multiple orderless aggregations.
# 2.3 Gating | 1706.06905#8 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
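As an illustration of the trainable orderless aggregation family surveyed in the 1706.06905 entry above, here is a NetVLAD-style sketch in numpy: descriptors are softly assigned to clusters, and per-cluster residuals are accumulated and normalized. In the actual networks the centers and assignment parameters are learned end-to-end; here they are plain arrays and the layout is illustrative.

```python
import numpy as np

def soft_vlad(descriptors, centers, assign_w, assign_b):
    """descriptors: (T, d) frame features; centers: (K, d) anchors;
    assign_w: (d, K) and assign_b: (K,) parameterize the learnable
    soft assignment. Returns a (K*d,) video-level representation."""
    logits = descriptors @ assign_w + assign_b              # (T, K)
    logits -= logits.max(axis=1, keepdims=True)
    a = np.exp(logits)
    a /= a.sum(axis=1, keepdims=True)                       # soft cluster weights
    resid = descriptors[:, None, :] - centers[None, :, :]   # (T, K, d) residuals
    V = (a[:, :, None] * resid).sum(axis=0)                 # (K, d)
    V /= np.linalg.norm(V, axis=1, keepdims=True) + 1e-12   # intra-normalization
    V = V.ravel()
    return V / (np.linalg.norm(V) + 1e-12)                  # final L2 normalization
```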
1706.06927 | 8 | precondition {clear(b) = true ∧ clear(b′) = true}, and effects loc(b) := b′ and clear(loc(b)) := true. In this case, the terms clear(b) and loc(b) for block b stand for state variables. clear(loc(b)) is a valid term, but not a state variable, as loc(b) is not a fixed constant symbol. The denotation of the term clear(loc(b)) in a state is a function of the loc(b) and clear(b′) state variables; whenever loc(b) = b′ holds in a state, the value of clear(loc(b)) will be that of the state variable clear(b′). | 1706.06927#8 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
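The action semantics in the last two 1706.06927 entries can be made concrete with a toy interpreter, assuming states are dictionaries over state variables and terms are nested tuples: effect left-hand sides and right-hand sides are all evaluated in the old state, so an update to clear(loc(b)) lands on the state variable that loc(b) currently denotes. The encoding below is a hypothetical illustration, not the planner's representation, and the extra clear(b2) := false effect completes the usual blocks-world move.

```python
def evaluate(term, state):
    """Terms are fixed constants (any non-tuple value, denoting themselves)
    or applications ('f', (arg_terms, ...)) of a fluent symbol f."""
    if not isinstance(term, tuple):
        return term
    f, args = term
    return state[(f, tuple(evaluate(a, state) for a in args))]

def apply_action(precondition, effects, state):
    """precondition: list of (term, value) equalities; effects: list of
    (lhs_term, rhs_term) updates f(t) := w, all evaluated in the old state."""
    if not all(evaluate(t, state) == v for t, v in precondition):
        return None                                   # action not applicable
    new_state = dict(state)
    for (f, args), rhs in effects:
        c = tuple(evaluate(a, state) for a in args)   # t^s = c
        new_state[(f, c)] = evaluate(rhs, state)      # f(c) := w^s
    return new_state

# Block b sits on b1; b2 is clear.  Action move(b, b2):
state = {('loc', ('b',)): 'b1', ('clear', ('b',)): True,
         ('clear', ('b1',)): False, ('clear', ('b2',)): True}
pre = [(('clear', ('b',)), True), (('clear', ('b2',)), True)]
eff = [(('clear', (('loc', ('b',)),)), True),   # clear(loc(b)) := true -> clear(b1)
       (('loc', ('b',)), 'b2'),
       (('clear', ('b2',)), False)]
state = apply_action(pre, eff, state)
assert state[('clear', ('b1',))] and state[('loc', ('b',))] == 'b2'
```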
1706.06978 | 8 | • We point out the limit of using a fixed-length vector to express a user's diverse interests and design a novel deep interest network (DIN), which introduces a local activation unit to adaptively learn the representation of user interests from historical behaviors w.r.t. given ads. DIN can greatly improve the expressive ability of the model and better capture the diversity of user interests.
⢠We develop two novel techniques to help training industrial deep networks: i) a mini-batch aware regularizer, which saves heavy computation of regularization on deep networks with huge number of parameters and is helpful for avoiding overfitting, ii) a data adaptive activation function, which generalizes PReLU by considering the distribution of inputs and shows well performance.
⢠We conduct extensive experiments on both public and Al- ibaba datasets. Results verify the effectiveness of proposed DIN and training techniques. Our code1 is publicly avail- able. The proposed approaches have been deployed in the commercial display advertising system in Alibaba, one of worldâs largest advertising platform, contributing significant improvement to the business.
In this paper we focus on CTR prediction modeling in the scenario of display advertising in the e-commerce industry. Methods discussed here can be applied in similar scenarios with rich user behaviors, such as personalized recommendation on e-commerce sites, feed ranking in social networks, etc. | 1706.06978#8 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
users' diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
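A sketch of a data adaptive activation in the spirit of the bullets above: the control p(x) is a sigmoid of the batch-standardized input, so the rectification point follows the mean and variance of the mini-batch instead of being fixed at 0 as in PReLU. The exact parameterization below follows the commonly cited Dice form and should be read as illustrative.

```python
import numpy as np

def dice(x, alpha=0.1, eps=1e-8):
    """x: (batch,) pre-activations. p(x) interpolates between the
    identity branch and a PReLU-like alpha-scaled branch, with the
    switching point adapting to the batch statistics of x."""
    mean, var = x.mean(), x.var()
    p = 1.0 / (1.0 + np.exp(-(x - mean) / np.sqrt(var + eps)))
    return p * x + (1.0 - p) * alpha * x
```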
1706.06708 | 9 | # 2.3 Notation
Next we define some notation for dealing with the Rubik's Cube and Rubik's Square problems.
To begin, we need a way to refer to cubies and stickers. For this purpose, we orient the puzzle to be axis-aligned. In the case of the Rubik's Square we arrange the n × n array of cubies in the x and y directions and we refer to a cubie by stating its x and y coordinates. In the case of the Rubik's Cube, we refer to a cubie by stating its x, y, and z coordinates. To refer to a sticker in either puzzle, we need only specify the face on which that sticker resides (e.g. "top" or "+z") and also the two coordinates of the sticker along the surface of the face (e.g. the x and y coordinates for a sticker on the +z face). | 1706.06708#9 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 9 | # 2.3 Gating
Gating mechanisms allow multiplicative interaction between a given input feature X and a gate vector with values between 0 and 1. They are commonly used in recurrent neural network models such as LSTM [16] and GRU [17] but have so far not been exploited in conjunction with other non-temporal aggregation strategies such as Fisher Vectors (FV), Vector of Locally Aggregated Descriptors (VLAD) or bag-of-visual-words (BoW). Our work aims to fill this gap and designs a video classification architecture combining non-temporal aggregation with gating mechanisms. One of the motivations for this choice is the recent Gated Linear Unit (GLU) [39], which has demonstrated significant improvements in natural language processing tasks.
Our gating mechanism, initially reported in [40], is also related to the parallel work on Squeeze-and-Excitation architectures [41], which suggests gated blocks for image classification tasks and has demonstrated excellent performance in the ILSVRC 2017 image classification challenge.
# 3 NETWORK ARCHITECTURE | 1706.06905#9 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
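For reference alongside Context Gating, a numpy sketch of the Gated Linear Unit [39] mentioned in the 1706.06905 entry above: a linear projection of the input is modulated by sigmoid gates computed from the same input. Weight shapes are chosen for illustration.

```python
import numpy as np

def glu(x, W, b, V, c):
    """Gated Linear Unit: (xW + b) modulated by sigmoid(xV + c).
    x: (n,); W, V: (n, m); b, c: (m,). Returns an (m,) vector."""
    gates = 1.0 / (1.0 + np.exp(-(x @ V + c)))
    return (x @ W + b) * gates
```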
1706.06927 | 9 | Formally, a state is an assignment of values to the state variables that determines a denotation (value) for every term and formula in the language. The denotation of a symbol or term t in the state s is written as t^s, while the denotation of terms r made up of fixed symbols only, which does not depend on the state, is written as r*. By default, non-standard fixed constant symbols c, which usually stand for object names, are assumed to denote themselves, meaning that c* = c. The states s just encode the denotation f^s of the functional fluent symbols, which, as the types of their arguments are all finite, can be represented as the values [f(c)]^s of a finite set of state variables. The denotation [f(t)]^s of a term f(t) for an arbitrary tuple of terms t is then given by the value [f(c)]^s of the state variable f(c) where c* = t^s. The denotation e^s of all terms, atoms, and formulas e in the state s follows in the standard way. | 1706.06927#9 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 9 | The rest of the paper is organized as follows. We discuss related work in section 2 and introduce the background on the characteristics of user behavior data in the display advertising system of an e-commerce site in section 3. Sections 4 and 5 describe in detail the design of the DIN model as well as the two proposed training techniques. We present experiments in section 6 and conclude in section 7.
2 RELATED WORK
The structure of CTR prediction models has evolved from shallow to deep. At the same time, the number of samples and the dimension
1. Experiment code on two public datasets is available on GitHub: https://github.com/zhougr1993/DeepInterestNetwork
of features used in CTR models have become larger and larger. In order to better extract feature relations and improve performance, several works pay attention to the design of the model structure.
As a pioneering work, NNLM [2] learns a distributed representation for each word, aiming to avoid the curse of dimensionality in language modeling. This method, often referred to as embedding, has inspired many natural language models and CTR prediction models that need to handle large-scale sparse inputs.
LS-PLM [9] and FM [20] models can be viewed as a class of networks with one hidden layer, which first employ an embedding layer on sparse inputs and then impose specially designed transformation functions for target fitting, aiming to capture the combination relations among features. | 1706.06978#9 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
users' diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
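To make the one-hidden-layer view of FM in the entry above concrete, a sketch of factorization machine scoring using the standard O(nk) identity for the pairwise term; all array names are illustrative.

```python
import numpy as np

def fm_score(x, w0, w, V):
    """x: (n,) features; w0: bias; w: (n,) linear weights; V: (n, k)
    factors. The pairwise term sum_{i<j} <v_i, v_j> x_i x_j equals
    0.5 * sum_f [(sum_i V[i,f] x_i)^2 - sum_i V[i,f]^2 x_i^2]."""
    s = V.T @ x                      # (k,)
    s2 = (V ** 2).T @ (x ** 2)       # (k,)
    return w0 + w @ x + 0.5 * np.sum(s ** 2 - s2)
```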
1706.06708 | 10 | If n = 2a + 1 is odd, then we will let the coordinates of the cubies in each direction range over the set {−a, −(a − 1), . . . , −1, 0, 1, . . . , a − 1, a}. This is equivalent to centering the puzzle at the origin. If, however, n = 2a is even, then we let the coordinates of the cubies in each direction range over the set {−a, −(a − 1), . . . , −1} ∪ {1, . . . , a − 1, a}. In this case, the coordinate scheme does not correspond to a standard coordinate scheme no matter how we translate the cube. This coordinate scheme is a good idea for the following reason: under this scheme, if a move relocates a sticker, the coordinates of that sticker remain the same up to permutation and negation. | 1706.06708#10 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 10 | # 3 NETWORK ARCHITECTURE
Our architecture for video classification is illustrated in Figure 2 and contains three main modules. First, the input features are extracted from video and audio signals. Next, the pooling module aggregates the extracted features into a single compact (e.g. 1024-dimensional) representation for the entire video. This
pooling module has a two-stream architecture treating visual and audio features separately. The aggregated representation is then enhanced by the Context Gating layer (section 3.1). Finally, the classification module takes the resulting representation as input and outputs scores for a pre-defined set of labels. The classification module adopts the Mixture-of-Experts [42] classifier as described in [19], followed by another Context Gating layer.
# 3.1 Context Gating
The Context Gating (CG) module transforms the input feature representation X into a new representation Y as
Y = σ(W X + b) ∘ X, (1) | 1706.06905#10 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
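A direct numpy transcription of equation (1) from the 1706.06905 entry above, for a single feature vector; in the network, W and b are trained jointly with the rest of the model.

```python
import numpy as np

def context_gating(x, W, b):
    """Equation (1): Y = sigmoid(W x + b) * x, with x: (n,), W: (n, n),
    b: (n,). The learned gates recalibrate each dimension of x."""
    gates = 1.0 / (1.0 + np.exp(-(W @ x + b)))
    return gates * x
```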
1706.06927 | 10 | An action a is applicable in a state s if [Pre(a)]^s = true, and the state s_a that results from the action a in s satisfies the equation f^(s_a)(t^s) = w^s for all the effects f(t) := w of a, and is otherwise equal to s. This means that the action a changes the value of the state variable f(c) to w^s in the state s_a iff there is an effect f(t) := w of action a such that t^s = c. For example, if X = 2 is true in s, the update X := X + 1 increases the value of X to 3 without affecting other state variables. Similarly, if loc(b) = b′ is true in s, the update clear(loc(b)) := true in s is equivalent to clear(b′) := true. A problem is a tuple P = (S, I, O, G, F) where S includes the non-standard symbols (fixed and fluent) and their types, the atoms I and the procedures in F provide the initial denotation s_0 of such symbols, O stands for the actions, and G is the goal. A plan for P is a sequence of applicable actions from O that maps the state s_0 into a state s | 1706.06927#10 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 10 | Deep Crossing [21], Wide&Deep Learning [4] and the YouTube Recommendation CTR model [3] extend LS-PLM and FM by replacing the transformation function with a complex MLP network, which greatly enhances the model capability. PNN [5] tries to capture high-order feature interactions by adding a product layer after the embedding layer. DeepFM [10] uses a factorization machine as the "wide" module in Wide&Deep [4], removing the need for feature engineering. Overall, these methods follow a similar model structure, combining an embedding layer (for learning the dense representation of sparse features) and an MLP (for automatically learning the combination relations of features). This kind of CTR prediction model greatly reduces manual feature engineering. Our base model follows this kind of model structure. However, in applications with rich user behaviors, features often contain variable-length lists of ids, e.g., searched terms or watched videos in the YouTube recommender system [3]. These models often transform the corresponding list of embedding vectors into a fixed-length vector via sum/average pooling, which causes a loss of information. The proposed DIN tackles this by adaptively learning the representation vector w.r.t. a given ad, improving the expressive ability of the model. | 1706.06978#10 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
users' diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
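The fixed-length sum/average pooling described in the chunk above is easy to state in code. Below is a minimal sketch, assuming only NumPy; the shapes and names are our own illustration, not the paper's implementation.

```python
# A minimal sketch of the fixed-length pooling step used by Embedding&MLP
# models on variable-length lists of behavior embeddings.
import numpy as np

def average_pool(behavior_embeddings: np.ndarray) -> np.ndarray:
    """Compress a (num_behaviors, dim) array into a single dim-vector.

    The output length is fixed no matter how many behaviors the user has,
    which is the information bottleneck DIN argues against.
    """
    return behavior_embeddings.mean(axis=0)

rng = np.random.default_rng(0)
print(average_pool(rng.normal(size=(3, 8))).shape)    # (8,) for 3 behaviors
print(average_pool(rng.normal(size=(500, 8))).shape)  # (8,) for 500 behaviors
```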
1706.06708 | 11 | Next, we need a way to distinguish the sets of cubies affected by a move from each other. In the Rubik's Square, there are two types of moves. The first type of move, which we will call a row move or a y move, affects all the cubies with some particular y coordinate. The second type of move, which we will call a column move or an x move, affects all the cubies with some particular x coordinate. We will refer to the set of cubies affected by a row move as a row and refer to the set of cubies affected by a column move as a column. In order to identify a move, we must identify which row or column is being flipped, by specifying whether the move is a row or column move as well as the index of the coordinate shared by all the moved cubies (e.g. the index −5 row move is the move that affects the cubies with y = −5).
In the Rubik's Cube, each STM/SQTM move affects a single slice of $n^2$ cubies sharing some coordinate. If the cubies share an x (or y or z) coordinate, then we call the slice an x (or y or
| 1706.06708#11 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 11 | The Context Gating (CG) module transforms the input feature representation X into a new representation Y as
Y = σ(WX + b) ∘ X, (1)
where X ∈ R^n is the input feature vector, σ is the element-wise sigmoid activation and ∘ is the element-wise multiplication. W ∈ R^{n×n} and b ∈ R^n are trainable parameters. The vector of weights σ(WX + b) ∈ [0, 1] represents a set of learned gates applied to the individual dimensions of the input feature X (see the module sketch after this record). | 1706.06905#11 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
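A minimal PyTorch sketch of the Context Gating transform of Eq. (1) above, Y = σ(WX + b) ∘ X. The module name, batch handling, and sizes are illustrative assumptions; the paper does not prescribe an implementation.

```python
# Context Gating: element-wise gates in [0, 1] re-weight the input features.
import torch
import torch.nn as nn

class ContextGating(nn.Module):
    def __init__(self, dim: int):
        super().__init__()
        self.linear = nn.Linear(dim, dim)  # W in R^{n x n}, b in R^n

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        gates = torch.sigmoid(self.linear(x))  # learned gates, one per dimension
        return gates * x                       # element-wise re-weighting of x

y = ContextGating(1024)(torch.randn(4, 1024))  # batch of 4 feature vectors
print(y.shape)  # torch.Size([4, 1024])
```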
1706.06927 | 11 | symbols, O stands for the actions, and G is the goal. A plan for P is a sequence of applicable actions from O that maps the state s_0 into a state s that satisfies G. It is assumed that standard symbols like '+', 1, etc. have their standard denotation. Fixed functional symbols f whose denotation is given by means of procedures in F are written as @f. The denotation of the other functional symbols must be given extensionally in I. | 1706.06927#11 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 11 | Attention mechanism originates from the Neural Machine Translation (NMT) field [1]. NMT takes a weighted sum of all the annotations to get an expected annotation and focuses only on information relevant to the generation of the next target word. A recent work, DeepIntent [26], applies attention in the context of search advertising. Similar to NMT, they use an RNN [24] to model text, then learn one global hidden vector to help paying attention to the key words in each query. It is shown that the use of attention can help capture the main intent of a query or ad. DIN designs a local activation unit to soft-search for relevant user behaviors and takes a weighted sum pooling to obtain the adaptive representation of user interests with respect to a given ad (see the sketch after this record). The user representation vector varies over different ads, which is different from DeepIntent, in which there is no interaction between ad and user.
We make our code publicly available, and further show how to successfully deploy DIN in one of the world's largest advertising systems with newly developed techniques for training large-scale deep networks with hundreds of millions of parameters.
3 BACKGROUND In e-commerce sites, such as Alibaba, advertisements are natural goods. In the rest of this paper, without special declaration, we regard ads as goods. Figure 1 briefly illustrates the running procedure
| 1706.06978#11 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector, in
regardless of what candidate ads are. The use of fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
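The local activation unit described above can be sketched as a small scoring network followed by weighted sum pooling, so the pooled user vector depends on the candidate ad. The interaction features (element-wise product) and layer sizes below are illustrative assumptions, not the exact DIN architecture.

```python
# A hedged sketch of attention-style "local activation" pooling over behaviors.
import torch
import torch.nn as nn

class LocalActivationUnit(nn.Module):
    def __init__(self, dim: int, hidden: int = 36):
        super().__init__()
        # Scores one (behavior, ad) pair from the pair plus their interaction.
        self.mlp = nn.Sequential(nn.Linear(3 * dim, hidden), nn.ReLU(),
                                 nn.Linear(hidden, 1))

    def forward(self, behaviors: torch.Tensor, ad: torch.Tensor) -> torch.Tensor:
        # behaviors: (T, dim) history embeddings; ad: (dim,) candidate embedding
        ad_rep = ad.expand_as(behaviors)
        pair = torch.cat([behaviors, ad_rep, behaviors * ad_rep], dim=-1)
        weights = self.mlp(pair)                 # (T, 1) relevance scores
        return (weights * behaviors).sum(dim=0)  # ad-dependent fixed-length vector

unit = LocalActivationUnit(dim=8)
user_vec = unit(torch.randn(5, 8), torch.randn(8))
print(user_vec.shape)  # torch.Size([8])
```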
1706.06708 | 12 | z) slice. As with the Rubik's Square, we identify the slice by its normal direction together with its cubies' index in that direction (e.g. the x = 3 slice). We will also refer to the six slices at the boundaries of the Cube as face slices (e.g. the +x face slice).
A move in a Rubik's Cube can be named by identifying the slice being rotated and the amount of rotation. We split this up into the following five pieces of information: the normal direction to the slice, the sign of the index of the slice, the absolute value of the index of the slice, the amount of rotation, and the direction of rotation. Splitting the information up in this way allows us not only to refer to individual moves (by specifying all five pieces of information) but also to refer to interesting sets of moves (by omitting one or more of the pieces of information); an illustrative encoding of these five fields follows this record. | 1706.06708#12 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
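The five pieces of information naming a move can be captured in a simple record type. The following dataclass is our own illustration of the convention described above; the face-move test assumes the odd-index coordinate scheme (indices −(n−1)/2 .. (n−1)/2), which is an assumption for illustration.

```python
# An illustrative encoding (not from the paper) of an STM/SQTM move name.
from dataclasses import dataclass
from typing import Literal

@dataclass(frozen=True)
class CubeMove:
    normal: Literal["x", "y", "z"]  # normal direction of the rotated slice
    sign: Literal[1, -1]            # sign of the slice index
    index: int                      # absolute value of the slice index
    angle: Literal[90, 180]         # turn (90 degrees) or flip (180 degrees)
    clockwise: bool                 # direction, viewed from the positive axis

    def is_face_move(self, n: int) -> bool:
        # Assumes odd n with slice indices -(n-1)/2 .. (n-1)/2.
        return self.index == (n - 1) // 2

move = CubeMove("y", -1, 5, 90, clockwise=True)  # a -y index-5 clockwise turn
print(move, move.is_face_move(11))
```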
1706.06905 | 12 | The motivation behind this transformation is two-fold. First, we wish to introduce non-linear interactions among activations of the input representation. Second, we wish to recalibrate the strengths of different activations of the input representation through a self-gating mechanism. The form of the Context Gating layer is inspired by the Gated Linear Unit (GLU) introduced recently for language modeling [39] that considers a more complex class of transformations given by σ(W1X + b1) ∘ (W2X + b2), with two sets of learnable parameters W1, b1 and W2, b2. Compared to the Gated Linear Unit [39], our Context Gating in (1) (i) reduces the number of learned parameters as only one set of weights is learnt, and (ii) re-weights directly the input vector X (instead of its linear transformation) and hence is suitable for situations where X has a specific meaning, such as the score of a class label, that is preserved by the layer. As shown in Figure 2, we use Context Gating in the feature pooling and classification modules. First, we use CG to transform the feature vector before passing it to the classification | 1706.06905#12 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 12 | # A. State Constraints
While we will make use of a small fragment of Functional STRIPS, we will also need a convenient extension; namely, state constraints [18, 30]. State constraints are formulas that are forced to be true in all reachable states, something achieved by interpreting state constraints as implicit action preconditions. State constraints are not to be confused with state invariants that refer to formulas that are true in all reachable states without imposing extra constraints on actions. For example, in the blocks world, the formula describing that no block is on
# (:action MoveBase | 1706.06927#12 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 12 | Figure 1: Illustration of the running procedure of the display advertising system in Alibaba, in which user behavior data plays important roles.
of the display advertising system in Alibaba, which consists of two main stages: i) a matching stage which generates a list of candidate ads relevant to the visiting user via methods like collaborative filtering, ii) a ranking stage which predicts the CTR for each given ad and then selects the top ranked ones. Every day, hundreds of millions of users visit the e-commerce site, leaving us with lots of user behavior data which contributes critically in building matching and ranking models. It is worth mentioning that users with rich historical behaviors contain diverse interests. For example, a young mother has browsed goods including a woolen coat, T-shirts, earrings, a tote bag, a leather handbag and a children's coat recently. These behavior data give us hints about her shopping interests. When she visits the e-commerce site, the system displays a suitable ad to her, for example a new handbag. Obviously the displayed ad only matches or activates part of the interests of this mother. In summary, the interests of users with rich behaviors are diverse and could be locally activated given certain ads. We show later in this paper that making use of these characteristics plays an important role in building the CTR prediction model. | 1706.06978#12 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector, in
regardless of what candidate ads are. The use of fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 13 | To identify the normal direction to a slice, we simply specify x, y, or z; for example, we could refer to a move as an x move whenever the rotating slice is normal to the x direction. We will use two methods to identify the sign of the index of a moved slice. Sometimes we will refer to positive moves or negative moves, and sometimes we will combine this information with the normal direction and specify that the move is a +x, −x, +y, −y, +z, or −z move. We use the term index-v move to refer to a move rotating a slice whose index has absolute value v. In the particular case that the slice rotated is a face slice, we instead use the term face move. We refer to a move as a turn if the angle of rotation is 90° and as a flip if the angle of rotation is 180°. In the case that the angle of rotation is 90°, we can specify further by using the terms clockwise turn and counterclockwise turn. We make the notational convention that clockwise and counterclockwise rotations around the x, y, or z axes are labeled according to the direction of rotation when looking from the direction of positive x, y, or z. | 1706.06708#13 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06927 | 13 | # (:action MoveBase
  :parameters (?e - base-graph-traj-id)
  :prec (and (= Arm ca0) (= Base (@source-b ?e)))
  :eff (and (:= Base (@target-b ?e))))
(:action MoveArm
  :parameters (?t - arm-graph-traj-id)
  :prec (and (= Arm (@source-a ?t)))
  :eff (and (:= Arm (@target-a ?t)) (:= Traj ?t)))
(:action Grasp
  :parameters (?o - object-id)
  :prec (and (= Hold None) (@graspable Base Arm (Conf ?o)))
  :eff (and (:= Hold ?o) (:= (Conf ?o) c-held)))
(:action Place
  :parameters (?o - object-id)
  :prec (and (= Hold ?o) (@placeable Base Arm))
  :eff (and (:= Hold None) (:= (Conf ?o) (@place Base Arm))))
(:state-constraint
  :parameter (?o - object-id)
  (@non-overlap Base Traj (Conf ?o) Hold)) | 1706.06927#13 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 13 | 4 DEEP INTEREST NETWORK Different from sponsored search, users come into the display advertising system without explicitly expressed intentions. Effective approaches are required to extract user interests from rich historical behaviors when building the CTR prediction model. Features that depict users and ads are the basic elements in the CTR modeling of an advertisement system. Making use of these features reasonably and mining information from them are critical. | 1706.06978#13 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector, in
regardless of what candidate ads are. The use of fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 14 | We also extend the same naming conventions to the Rubik's Square moves. For example, a positive row move is any row move with positive index and an index-v move is any move with index ±v.
# 2.4 Group-theoretic approach
An alternative way to look at the Rubik's Square and Rubik's Cube problems is through the lens of group theory. The transformations that can be applied to a Rubik's Square or Rubik's Cube by a sequence of moves form a group with composition as the group operation. Define $RS_n$ to be the group of possible sticker permutations in an $n \times n$ Rubik's Square and define $RC_n$ to be the group of possible sticker permutations in an $n \times n \times n$ Rubik's Cube.
Consider the moves possible in an $n \times n$ Rubik's Square or an $n \times n \times n$ Rubik's Cube. Each such move has a corresponding element in group $RS_n$ or $RC_n$. | 1706.06708#14 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 14 | # 3.2 Relation to residual connections
Residual connections have been introduced in [1]. They demonstrate faster and better training of deep convolutional neural networks as well as better performance for a variety of tasks. Residual connections can be formulated as
Y = f(WX + b) + X, (2)
where X are the input features, (W, b) the learnable parameters of the linear mapping (or it can be a convolution), and f is a non-linearity (typically a Rectifier Linear Unit as expressed in [1]). One advantage of residual connections is the possibility of gradient propagation directly into X during training, avoiding the vanishing gradient problem. To show this, the gradient of the residual connection can be written as:
∇Y = ∇(f(WX + b)) + ∇X. (3)
One can notice that the gradient ∇Y is the sum of the gradient of the previous layer ∇X and the gradient ∇(f(WX + b)). The
Fig. 3: Illustration of Context Gating that down-weights visual activations of Tree for a skiing scene. | 1706.06905#14 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 14 | Fig. 1: CTMP Model Fragment in Functional STRIPS: Action and state constraint schemas. Abbreviations used. Symbols preceded by '@' denote procedures. All objects assumed to have the same shape. Initial situation provides initial values for the state variables Base, Arm (resting), Traj (dummy), and Conf(o) for each object. Goals describe target object configurations. State constraints prevent collisions during arm motions. Motion planners and collision checkers used at compilation time, not at plan time, as detailed in the Preprocessing section.
two blocks at the same time is a state invariant. On the other hand, if we assert the formula ¬[on(b3, b4) ∧ on(b4, b5)] as a state constraint, we are ruling out actions leading to states where the formula [on(b3, b4) ∧ on(b4, b5)] holds.
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 14 | 4.1 Feature Representation Data in industrial CTR prediction tasks is mostly in a multi-group categorial form, for example, [weekday=Friday, gender=Female, visited_cate_ids={Bag,Book}, ad_cate_id=Book], which is normally transformed into high-dimensional sparse binary features via encoding [4, 19, 21]. Mathematically, the encoding vector of the i-th feature group is formalized as $t_i \in \mathbb{R}^{K_i}$, where $K_i$ denotes the dimensionality of feature group $i$, which means feature group $i$ contains $K_i$ unique ids. $t_i[j]$ is the j-th element of $t_i$, with $t_i[j] \in \{0, 1\}$ and $\sum_{j=1}^{K_i} t_i[j] = k$. A vector $t_i$ with $k = 1$ refers to one-hot encoding and $k > 1$ refers to multi-hot encoding. Then one instance can be represented as $x = [t_1^T, t_2^T, ..., t_M^T]^T$ in a group-wise manner, where $M$ is the number of feature groups and $\sum_{i=1}^{M} K_i = K$, with $K$ the dimensionality of the entire feature space (see the encoding sketch after this record). In this way, the aforementioned instance with
Table 1: Statistics of feature sets used in the display advertising system in Alibaba. Features are composed of sparse binary vectors in the group-wise manner. | 1706.06978#14 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector, in
regardless of what candidate ads are. The use of fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
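The group-wise one-hot/multi-hot encoding formalized above can be reproduced in a few lines. The toy vocabularies below are assumptions chosen to match the paper's running example.

```python
# Group-wise encoding: each group i yields t_i in {0,1}^{K_i}; an instance is
# the concatenation x = [t_1^T, ..., t_M^T]^T.
import numpy as np

def encode_group(active_ids, vocabulary):
    """One-hot if a single id is active (k = 1), multi-hot if several (k > 1)."""
    t = np.zeros(len(vocabulary), dtype=np.int8)
    for a in active_ids:
        t[vocabulary.index(a)] = 1
    return t

weekday = encode_group(["Friday"], ["Mon", "Tue", "Wed", "Thu", "Friday", "Sat", "Sun"])
gender = encode_group(["Female"], ["Male", "Female"])
cates = encode_group(["Bag", "Book"], ["Shoes", "Bag", "Book", "Toys"])  # multi-hot
x = np.concatenate([weekday, gender, cates])
print(x)  # concatenation of [0 0 0 0 1 0 0], [0 1] and [0 1 1 0]
```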
1706.06708 | 15 | For the Rubik's Square, let $x_i \in RS_n$ be the transformation of flipping the column with index $i$ in an $n \times n$ Rubik's Square and let $y_i$ be the transformation of flipping the row with index $i$ in the Square. Then if $I$ is the set of row/column indices in an $n \times n$ Rubik's Square we have that $RS_n$ is generated by the set of group elements $\bigcup_{i \in I} \{x_i, y_i\}$. Similarly, for the Rubik's Cube, let $x_i$, $y_i$, and $z_i$ in $RC_n$ be the transformations corresponding to clockwise turns of x, y, or z slices with index $i$. Then if $I$ is the set of slice indices in an $n \times n \times n$ Rubik's Cube we have that $RC_n$ is generated by the set of group elements $\bigcup_{i \in I} \{x_i, y_i, z_i\}$. Using these groups we obtain a new way of identifying puzzle configurations. Let $C_0$ be a canonical solved configuration of a Rubik's Square or Rubik's Cube puzzle. For the $n \times n$ Rubik's Square, define $C_0$ to have top face red, bottom face blue, and the other four faces green, orange, yellow, and white in some | 1706.06708#15 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 15 | Fig. 3: Illustration of Context Gating that down-weights visual activations of Tree for a skiing scene.
vanishing gradient problem is overcome thanks to the term ∇X, which allows the gradient to backpropagate directly from Y to X without decreasing in norm. A similar effect is observed with Context Gating, which has the following gradient equation:
∇Y = ∇(σ(WX + b)) ∘ X + σ(WX + b) ∘ ∇X.
In this case, the term ∇X is weighted by the activations σ(WX + b). Hence, for dimensions where σ(WX + b) is close to 1, gradients are directly propagated from Y to X. In contrast, for values close to 0 the gradient propagation vanishes (see the numerical check after this record). This property is valuable as it allows stacking several non-linear layers while avoiding vanishing gradient problems.
# 3.3 Motivation for Context Gating
Our goal is to predict human-generated tags for a video. Such tags typically represent only a subset of objects and events which are most relevant to the context of the video. To mimic this behavior and to suppress irrelevant labels, we introduce the Context Gating module both to re-weight the features and the output labels of our architecture. | 1706.06905#15 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
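The gating of the ∇X term can be checked numerically. The short PyTorch snippet below is our own illustration: with W = 0 and a large bias the gates saturate near 1, so the gradient reaching X is close to all-ones, matching the gradient equation above.

```python
# Numerical check: saturated gates (~1) pass the gradient straight through.
import torch

n = 4
x = torch.randn(n, requires_grad=True)
W = torch.zeros(n, n)            # zero weights => gate value set by bias only
b = torch.full((n,), 10.0)       # sigmoid(10) ~ 0.99995, i.e. gates near 1

gates = torch.sigmoid(W @ x + b)
y = gates * x                    # Context Gating forward pass
y.sum().backward()
print(x.grad)                    # close to all-ones: gradient passes unchanged
```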
1706.06927 | 15 | A Functional STRIPS problem with state constraints is a tuple P' = ⟨S, I, O, G, C, F⟩ where the new component C stands for a set of formulas expressing the state constraints. The syntax for these formulas is the same as for those encoding (explicit) action preconditions but their semantics is different: an action a is deemed applicable in a state s when both [Pre(a)]^s = true and the state s_a that results from applying a to s is such that [c]^{s_a} = true for every state constraint c ∈ C. A plan for P' is thus a sequence of actions from O that maps the state s_0 into a state s that satisfies G, and such that for each such action a, Pre(a) is true in the state s where the action a is applied, and all constraints in C are true in the resulting state (see the applicability sketch after this record). It is assumed that the state constraints hold in the initial state.
# III. MODELING PICK-AND-PLACE PROBLEMS
We consider CTMP problems involving a robot and a number of objects located on tables of the same height. The tasks involve moving some objects from some initial configuration to a final configuration or set of configurations, which may require moving obstructing objects as well. The | 1706.06927#15 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
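The applicability semantics just described — the precondition true in s, and every state constraint true in the successor — can be sketched directly. The Python below is our own pseudocode-level illustration with states as plain dictionaries; it is not the planner's actual machinery.

```python
# Applicability with state constraints: precondition holds in s AND every
# constraint holds in the successor state s_a.
from typing import Callable, Iterable

State = dict                      # state variable name -> value
Formula = Callable[[State], bool]

def applicable(state: State, precondition: Formula,
               apply_effects: Callable[[State], State],
               constraints: Iterable[Formula]) -> bool:
    if not precondition(state):
        return False
    successor = apply_effects(state)
    return all(c(successor) for c in constraints)

# Blocks-world style example: forbid on(b3,b4) and on(b4,b5) simultaneously.
constraint = lambda s: not (s.get("on_b3") == "b4" and s.get("on_b4") == "b5")
state = {"on_b3": "table", "on_b4": "b5"}
move_b3_onto_b4 = lambda s: {**s, "on_b3": "b4"}
print(applicable(state, lambda s: True, move_b3_onto_b4, [constraint]))  # False
```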
1706.06978 | 15 | Table 1: Statistics of feature sets used in the display advertising system in Alibaba. Features are composed of sparse binary vectors in the group-wise manner.
Category / Feature Group / Dimensionality / Type / #Nonzero Ids per Instance:
User Profile Features: gender (2, one-hot, 1); age_level (~10, one-hot, 1); ...
User Behavior Features: visited_goods_ids (~10^9, multi-hot, ~10^3); visited_shop_ids (~10^7, multi-hot, ~10^3); visited_cate_ids (~10^4, multi-hot, ~10^2)
Ad Features: goods_id (~10^7, one-hot, 1); shop_id (~10^5, one-hot, 1); cate_id (~10^4, one-hot, 1); ...
Context Features: pid (~10, one-hot, 1); time (~10, one-hot, 1); ...
four groups of features are illustrated as:
weekday=Friday: [0,0,0,0,1,0,0]
gender=Female: [0,1]
visited_cate_ids={Bag,Book}: [0,...,1,...,1,...,0]
ad_cate_id=Book: [0,...,1,...,0] | 1706.06978#15 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector, in
regardless of what candidate ads are. The use of fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 16 | n Rubik's Square, define $C_0$ to have top face red, bottom face blue, and the other four faces green, orange, yellow, and white in some fixed order. For the $n \times n \times n$ Rubik's Cube, let $C_0$ have the following face colors: the +x face is orange, the −x face is red, the +y face is green, the −y face is yellow, the +z face is white, and the −z face is blue. Then from any element of $RS_n$ or $RC_n$, we can construct | 1706.06708#16 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 16 | Capturing dependencies among features. Context Gating can help create dependencies between visual activations. Take an example of a skiing video showing a skiing person, snow and trees. While network activations for the Tree features might be high, trees might be less important in the context of skiing where people are more likely to comment about the snow and skiing rather than the forest. Context Gating can learn to down-weight visual activations for Tree when it co-occurs with visual activations for Ski and Snow as illustrated in Figure 3.
Capturing prior structure of the output space. Context Gating can also create dependencies among output class scores when applied to the classification layer of the network. This makes it possible to learn a prior structure on the output probability space, which can be useful in modeling biases in label annotations. | 1706.06905#16 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 16 | model is tailored to a PR2 robot using a single arm, but can be generalized easily.
The main state variables Base, Arm, and Hold denote the configuration of the robot base, the arm configuration, and the content of the gripper, if any. In addition, for each object o, the state variable Conf(o) denotes the configuration of object o. The configuration of the robot base represents the 2D position of the base and its orientation angle. The configuration of the robot arm represents the configuration of the end effector: its 3D position, pitch, roll, and yaw. Finally, object configurations are 3D positions, as for simplicity we consider objects that are symmetric, and hence their orientation angle is not relevant. There is also a state variable Traj, encoding the last trajectory followed by the robot arm, which is needed for checking collisions during arm motions. All configurations and trajectories are obtained from a preprocessing stage, described in the next section, and are represented in the planning encoding by symbolic ids. When plans are executed, trajectory ids become motion plans; i.e. precompiled sequences of base and arm joint vectors, not represented explicitly in the planning problem. | 1706.06927#16 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 16 | ad_cate_id=Book: [0,...,1,...,0]
The whole feature set used in our system is described in Table 1. It is composed of four categories, among which user behavior features are typically multi-hot encoding vectors and contain rich information of user interests. Note that in our setting, there are no combination features. We capture the interaction of features with a deep neural network.
4.2 Base Model (Embedding&MLP) Most of the popular model structures [3, 4, 21] share a similar Embedding&MLP paradigm, which we refer to as base model, as shown in the left of Fig. 2. It consists of several parts:
Embedding layer. As the inputs are high-dimensional binary vectors, an embedding layer is used to transform them into low-dimensional dense representations. For the i-th feature group $t_i$, let $W^i = [w^i_1, ..., w^i_j, ..., w^i_{K_i}] \in \mathbb{R}^{D \times K_i}$ represent the i-th embedding dictionary, where $w^i_j \in \mathbb{R}^D$ is an embedding vector with dimensionality $D$. The embedding operation follows the table lookup mechanism, as illustrated in Fig. 2. | 1706.06978#16 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, finally concatenated together to fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector, in
regardless of what candidate ads are. The use of fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN) which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, improving the
expressive ability of model greatly. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 17 | a configuration of the corresponding puzzle by applying that element to $C_0$. In other words, every transformation $t \in RS_n$ or $t \in RC_n$ corresponds with the configuration $C_t = t(C_0)$ of the $n \times n$ Rubik's Square or $n \times n \times n$ Rubik's Cube that is obtained by applying $t$ to $C_0$.
Using this idea, we deï¬ne a new series of problems:
Problem 3. The Group Rubikâs Square problem has as input a transformation t â RSn and a value k. The goal is to decide whether the transformation t can be reversed by a sequence of at most k transformations corresponding to Rubikâs Square moves. In other words, the answer is âyesâ if and only if the transformation t can be reversed by a sequence of at most k transformations of the form xi or yi.
Problem 4. The Group STM/SQTM Rubik's Cube problem has as input a transformation t ∈ RCn and a value k. The goal is to decide whether the transformation t can be reversed by a sequence of at most k transformations corresponding with legal Rubik's Cube moves under move count metric STM/SQTM. | 1706.06708#17 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
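The group-theoretic phrasing in the chunk above reduces to plain permutation composition: a transformation is a permutation of sticker positions, and a candidate solution is checked by composing move permutations with t and testing for the identity. The sketch below illustrates only that check, with a toy move set that is not the puzzles' actual one.

```python
import numpy as np

def compose(p, q):
    """Permutation composition: (p o q)[i] = p[q[i]]."""
    return p[q]

n_positions = 6
identity = np.arange(n_positions)
move = np.array([1, 0, 2, 3, 4, 5])   # a toy involutive "move" (hypothetical)
t = compose(move, identity)           # a scrambling transformation

# (t, k) is a "yes" instance iff some sequence of <= k move permutations
# composed with t yields the identity; here one application of `move` suffices.
assert np.array_equal(compose(move, t), identity)
```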
1706.06905 | 17 | 4 LEARNABLE POOLING METHODS Within our video classification architecture described above, we investigate several types of learnable pooling models, which we describe next. Previous successful approaches [18], [19] employed recurrent neural networks such as LSTM or GRU for the encoding of the sequential features. We chose to focus on non-recurrent aggregation techniques. This is motivated by several factors: first, recurrent models are computationally demanding for long temporal sequences as it is not possible to parallelize the sequential computation. Moreover, it is not clear if treating the aggregation problem as a sequence modeling problem is necessary. As we show in our experiments, there is almost no change in performance if we shuffle the frames in a random order as almost all of the
relevant signal relies on the static visual cues. All we actually need to do is to find a way to efficiently remember all of the relevant visual cues. We will first review the NetVLAD [23] aggregation module and then explain how we can exploit the same idea to imitate the Fisher Vector and Bag-of-visual-Words aggregation schemes.
# 4.1 NetVLAD aggregation | 1706.06905#17 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 17 | The encoding assumes two finite graphs: a base graph, where the nodes stand for robot base configurations and edges stand for trajectories among pairs of base configurations, and an arm graph, where nodes stand for end-effector configurations (relative to a fixed base), and edges stand for arm trajectories among pairs of such configurations. The details for how such graphs are generated are not relevant for the planning encoding and will be described below. As a reference, we will consider instances with tens of objects, and base and arm graphs with hundreds of configurations each, representing thousands of robot configurations.
A fragment of the planning encoding featuring all actions and the state constraints is shown in Figure 1. Actions MoveBase(e) take an edge e from the base graph as an argument, and update the base configuration of the robot to the target configuration associated with the edge. The precondition is that the source configuration of the edge corresponds to the current base configuration, and that the arm is in the resting configuration ca0. Actions MoveArm(t) work in the same way, but the edges t of the arm graph are used instead. | 1706.06927#17 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form that, unlike PDDL, make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
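As an illustration of the MoveBase(e) schema described in the chunk above: the precondition checks that the edge's source matches the current base configuration and that the arm is resting (ca0), and the effect updates the base to the edge's target. The sketch below uses invented edge and configuration names; the real encoding also tracks the arm trajectory and held object.

```python
# Hypothetical base graph: edge name -> (source config, target config).
base_edges = {"e1": ("cb0", "cb1"), "e2": ("cb1", "cb2")}

state = {"Base": "cb0", "Arm": "ca0"}

def move_base(state, e):
    src, dst = base_edges[e]
    if state["Base"] == src and state["Arm"] == "ca0":  # precondition
        state["Base"] = dst                              # effect
        return True
    return False

assert move_base(state, "e1") and state["Base"] == "cb1"
```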
1706.06978 | 17 | • If $t_i$ is a one-hot vector with the $j$-th element $t_i[j] = 1$, the embedded representation of $t_i$ is a single embedding vector $e_i = w^i_j$.
• If $t_i$ is a multi-hot vector with $t_i[j] = 1$ for $j \in \{i_1, i_2, \ldots, i_k\}$, the embedded representation of $t_i$ is a list of embedding vectors: $\{e_{i_1}, e_{i_2}, \ldots, e_{i_k}\} = \{w^i_{i_1}, w^i_{i_2}, \ldots, w^i_{i_k}\}$.
Pooling layer and Concat layer. Notice that different users have different numbers of behaviors. Thus the number of non-zero values for the multi-hot behavioral feature vector $t_i$ varies across instances, causing the lengths of the corresponding list of embedding vectors to be variable. As fully connected networks can only handle fixed-length inputs, it is a common practice [3, 4] to transform the list of embedding vectors via a pooling layer to get a fixed-length vector:
$e_i = \mathrm{pooling}(e_{i_1}, e_{i_2}, \ldots, e_{i_k})$. (1) | 1706.06978#17 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
the user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN has now been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
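A minimal numpy sketch of Eq. (1) from the chunk above: sum or average pooling turns the variable-length list of behavior embeddings into one fixed-length vector. Shapes and values are illustrative assumptions.

```python
import numpy as np

def pool(embeddings, mode="sum"):
    """Eq. (1): e_i = pooling(e_{i1}, ..., e_{ik}) over a variable-length list."""
    E = np.stack(embeddings)              # shape (k, D)
    return E.sum(axis=0) if mode == "sum" else E.mean(axis=0)

D = 4
behaviors = [np.random.randn(D) for _ in range(7)]  # 7 behaviors for this user
fixed_len = pool(behaviors, mode="sum")             # shape (D,) regardless of k
print(fixed_len.shape)
```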
1706.06708 | 18 | We can interpret these problems as variants of the Rubik's Square or Rubik's Cube problems. For example, the Rubik's Square problem asks whether it is possible (in a given number of moves) to unscramble a Rubik's Square configuration so that each face ends up monochromatic, while the Group Rubik's Square problem asks whether it is possible (in a given number of moves) to unscramble a Rubik's Square configuration so that each sticker goes back to its exact position in the originally solved configuration C0. As you see, the Group Rubik's Square problem, as a puzzle, is just a more difficult variant of the puzzle: instead of asking the player to move all the stickers of the same color to the same face, this variant asks the player to move each sticker to the exact correct position. Similarly, the Group STM/SQTM Rubik's Cube problem as a puzzle asks the player to move each sticker to an exact position. These problems can have practical applications with physical puzzles. For example, some Rubik's Cubes have pictures split up over the stickers of each | 1706.06708#18 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 18 | # 4.1 NetVLAD aggregation
The NetVLAD [23] architecture has been proposed for place recognition to reproduce the VLAD encoding [15], but in a differentiable manner, where the clusters are tuned via backpropagation instead of using k-means clustering. It was then extended to action recognition in video [26]. The main idea behind NetVLAD is to replace the hard assignment of descriptor $x_i$ to cluster $k$ by a soft assignment:
$$a_k(x_i) = \frac{e^{w_k^\top x_i + b_k}}{\sum_{j=1}^{K} e^{w_j^\top x_i + b_j}} \qquad (5)$$
where $(w_j)_j$ and $(b_j)_j$ are learnable parameters. In other words, the soft assignment $a_k(x_i)$ of descriptor $x_i$ to cluster $k$ measures on a scale from 0 to 1 how close the descriptor $x_i$ is to cluster $k$. In the hard assignment case, $a_k(x_i)$ would be equal to 1 if the closest cluster of $x_i$ is cluster $k$ and 0 otherwise. For the rest of the paper, $a_k(x_i)$ will denote the soft assignment of descriptor $x_i$ to cluster $k$. If we write $c_j$, $j \in [1, K]$ for the $j$-th learnable cluster, the NetVLAD descriptor can be written as
$$\mathrm{VLAD}(j,k) = \sum_{i=1}^{N} a_k(x_i)\,(x_i(j) - c_k(j)) \qquad (6)$$
which computes the weighted sum of residuals $x_i - c_k$ of descriptors $x_i$ from the learnable anchor point $c_k$ of cluster $k$. | 1706.06905#18 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
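A compact numpy sketch of Eqs. (5)–(6) above: the softmax soft-assignment followed by aggregation of residuals against the anchors $c_k$. Dimensions are illustrative; in the actual model w, b, and c are trained by backpropagation rather than sampled randomly.

```python
import numpy as np

N, D, K = 100, 16, 8                 # descriptors, feature dim, clusters (illustrative)
X = np.random.randn(N, D)            # descriptors x_i
w = np.random.randn(K, D)            # soft-assignment weights w_k
b = np.random.randn(K)               # soft-assignment biases b_k
c = np.random.randn(K, D)            # learnable anchors c_k

# Eq. (5): a_k(x_i) = softmax over clusters of w_k^T x_i + b_k
logits = X @ w.T + b                 # shape (N, K)
a = np.exp(logits - logits.max(1, keepdims=True))
a /= a.sum(axis=1, keepdims=True)

# Eq. (6): VLAD(j, k) = sum_i a_k(x_i) * (x_i(j) - c_k(j))
vlad = np.einsum('nk,nkd->kd', a, X[:, None, :] - c[None, :, :])  # shape (K, D)
print(vlad.shape)
```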
1706.06927 | 18 | There are also actions Grasp(o) and Place(o) for grasping and placing objects o. The grasping action requires that the gripper is empty and that @graspable(Base,Arm,Conf(o)) is true, where the procedure denoted by the symbol @graspable checks if the robot configuration, as determined by its base and (relative) arm configuration, is such that object o in its current configuration can be grasped by just closing the gripper. Likewise, the atoms Hold = o and @placeable(Base,Arm,Conf(o)) are preconditions of the action Place(o).
The total number of ground actions is given by the sum of the number of edges in the two graphs and the number of objects. This small number of actions is made possible by the planning language where robot, arm, and object configurations do not appear as action arguments. The opposite would be true in a STRIPS encoding where action effects are determined solely by the action (add and delete lists) and do not depend | 1706.06927#18 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form that, unlike PDDL, make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
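A sketch of how a precondition test such as @graspable reduces to a lookup once the tables described later are precompiled; the table contents and configuration names below are invented placeholders, not the system's actual data.

```python
# Precompiled lookup standing in for @graspable(Base, Arm, Conf(o)); in the
# real system these entries come from motion-planning preprocessing.
graspable = {("cb1", "ca3", "co7"): True}   # hypothetical entries

def can_grasp(state, obj_conf, hold):
    """Grasp(o) precondition: empty gripper plus a @graspable table hit."""
    return hold is None and graspable.get((state["Base"], state["Arm"], obj_conf), False)

state = {"Base": "cb1", "Arm": "ca3"}
assert can_grasp(state, "co7", hold=None)
```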
1706.06978 | 18 | ei = pooling(ei1 , ei2 , ...eik ). (1)
[Figure 2 diagram: the base model (Embedding&MLP) on the left and the Deep Interest Network with its local activation unit on the right; the diagram itself is not recoverable from the extraction.] | 1706.06978#18 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
the user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN has now been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 19 | position. These problems can have practical applications with physical puzzles. For example, some Rubik's Cubes have pictures split up over the stickers of each face instead of just monochromatic colors on the stickers. For these puzzles, as long as no two stickers are the same, the Group STM/SQTM Rubik's Cube problem is more applicable than the STM/SQTM Rubik's Cube problem (which can leave a face "monochromatic" but scrambled in image). | 1706.06708#19 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 19 | which computes the weighted sum of residuals $x_i - c_k$ of descriptors $x_i$ from the learnable anchor point $c_k$ of cluster $k$.
# 4.2 Beyond NetVLAD aggregation
By exploiting the same cluster soft-assignment idea, we can also imitate operations similar to the traditional Bag-of-visual-words [20], [21] and Fisher Vectors [22] in a differentiable manner. For bag-of-visual-words (BOW) encoding, we use soft-assignment of descriptors to visual word clusters [23], [43] to obtain a differentiable representation. The differentiable BOW representation can be written as:
$$\mathrm{BOW}(k) = \sum_{i=1}^{N} a_k(x_i) \qquad (7)$$ | 1706.06905#19 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
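A numpy sketch of Eq. (7) above: the differentiable BOW descriptor simply sums the soft-assignment weights per cluster. Sizes are illustrative, and the logits stand in for the learned $w_k^\top x_i + b_k$ of Eq. (5).

```python
import numpy as np

N, K = 100, 64                         # descriptors and visual-word clusters
logits = np.random.randn(N, K)         # w_k^T x_i + b_k from Eq. (5)
a = np.exp(logits - logits.max(1, keepdims=True))
a /= a.sum(axis=1, keepdims=True)      # soft assignments a_k(x_i)

bow = a.sum(axis=0)                    # Eq. (7): BOW(k) = sum_i a_k(x_i), shape (K,)
print(bow.shape)
```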
1706.06927 | 19 | on the state. The number of state variables is also small, namely, one state variable for each object and four other state variables. Atoms whose predicate symbols denote procedures, like @graspable(Base,Arm,Conf(o)), do not represent state variables or fluents, as the denotation of such predicates is fixed and constant. These procedures play a key role in the encoding, and in the next section we look at the preprocessing that converts such procedures into fast lookup operations. | 1706.06927#19 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form that, unlike PDDL, make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
1706.06978 | 19 | Figure 2: Network Architecture. The left part illustrates the network of the base model (Embedding&MLP). Embeddings of cate_id, shop_id and goods_id belonging to one goods are concatenated to represent one visited goods in the user's behaviors. The right part is our proposed DIN model. It introduces a local activation unit, with which the representation of user interests varies adaptively given different candidate ads.
The two most commonly used pooling layers are sum pooling and average pooling, which apply element-wise sum/average operations to the list of embedding vectors.
Both embedding and pooling layers operate in a group-wise manner, mapping the original sparse features into multiple fixed-length representation vectors. Then all the vectors are concatenated together to obtain the overall representation vector for the instance.
MLP. Given the concatenated dense representation vector, fully connected layers are used to learn the combination of features automatically. Recently developed methods [4, 5, 10] focus on designing structures of MLP for better information extraction.
Loss. The objective function used in the base model is the negative log-likelihood function defined as:
$$L = -\frac{1}{N} \sum_{(x,y) \in S} \big( y \log p(x) + (1-y) \log(1-p(x)) \big), \qquad (2)$$
training data and add the burden of computation and storage, which may not be tolerated for an industrial online system. | 1706.06978#19 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
the user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN has now been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
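A numpy sketch of the objective in Eq. (2) above, with a small epsilon clip added for numerical stability (an implementation detail of this sketch, not from the paper).

```python
import numpy as np

def neg_log_likelihood(p, y, eps=1e-12):
    """Eq. (2): L = -(1/N) * sum_i [ y_i*log p_i + (1-y_i)*log(1-p_i) ]."""
    p = np.clip(p, eps, 1.0 - eps)   # numerical stability (sketch-only detail)
    return -np.mean(y * np.log(p) + (1.0 - y) * np.log(1.0 - p))

p = np.array([0.9, 0.2, 0.6])        # predicted click probabilities p(x)
y = np.array([1.0, 0.0, 1.0])        # click labels
print(neg_log_likelihood(p, y))
```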
1706.06708 | 20 | We formalize the idea that the Group version of the puzzle is a strictly more difficult puzzle in the following lemmas:
Lemma 2.1. If (t, k) is a "yes" instance to the Group Rubik's Square problem, then (t(C0), k) is a "yes" instance to the Rubik's Square problem.
Lemma 2.2. If (t, k) is a "yes" instance to the Group STM/SQTM Rubik's Cube problem, then (t(C0), k) is a "yes" instance to the STM/SQTM Rubik's Cube problem. | 1706.06708#20 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 20 | $$\mathrm{BOW}(k) = \sum_{i=1}^{N} a_k(x_i) \qquad (7)$$
Notice that the exact bag-of-visual-words formulation is reproduced if we replace the soft assignment values by their hard assignment equivalents. This formulation is closely related to the Neural BoF formulation [44], but differs in the way of computing the soft assignment. In detail, [44] performs a softmax operation over the computed L2 distances between the descriptors and the cluster centers, whereas we use the soft-assignment given by eq. (5), where the parameters w are learnable without an explicit relation to computing the L2 distance to cluster centers. It also relates to [45], which uses a recurrent neural network to perform the aggregation. The advantage of BOW aggregation over NetVLAD is that it aggregates a list of feature descriptors into a much more compact representation, given a fixed number of clusters. The drawback is that significantly more clusters are needed to obtain a rich representation of the aggregated descriptors.
Inspired by Fisher Vector [22] encoding, we also experimented with modifying the NetVLAD architecture to allow learning of second order feature statistics within the clusters. We will denote this representation as NetFV (for Net Fisher Vectors) as it aims at imitating the standard Fisher Vector encoding [22]. Reusing the previously established soft assignment notation, we can write the NetFV representation as | 1706.06905#20 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
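A numpy sketch of the NetFV statistics in Eqs. (8)–(9) as reconstructed in the following chunk; in the model, the soft assignments, centers, and covariances are all learned, whereas here they are random stand-ins with illustrative shapes.

```python
import numpy as np

N, D, K = 100, 16, 8
X = np.random.randn(N, D)                        # descriptors x_i
c = np.random.randn(K, D)                        # cluster centers c_k
sigma = 1.0 + 0.1 * np.random.randn(K, D) ** 2   # positive diagonal covariances
a = np.random.dirichlet(np.ones(K), size=N)      # stand-in soft assignments a_k(x_i)

r = (X[:, None, :] - c[None, :, :]) / sigma[None, :, :]       # normalized residuals
fv1 = np.einsum('nk,nkd->kd', a, r)                           # Eq. (8), first order
fv2 = np.einsum('nk,nkd->kd', a, r ** 2) - a.sum(0)[:, None]  # Eq. (9), second order
print(fv1.shape, fv2.shape)
```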
1706.06927 | 20 | The only subtle aspect of the encoding is in the state constraints used to prevent collisions. Collisions are to be avoided not just at the beginning and end of actions, but also during action execution. For simplicity, we assume that robot-base moves do not cause collisions (with mobile objects), and hence that collisions result exclusively from arm motions. We enforce this by restricting the mobile objects to be on top of tables that are fixed, and by requiring the arm to be in a suitable resting configuration (ca0) when the robot base moves. There is one state constraint @nonoverlap(Base,Traj,Conf(o),Hold) for each object o, where Traj is the state variable that keeps track of the last arm trajectory executed by the robot. The procedure denoted by the symbol @nonoverlap tests whether a collision occurs between object o in configuration Conf(o) when the robot arm moves along the trajectory Traj and the robot base configuration is Base. The test depends also on whether the gripper is holding an object or not. As we will show in the next section, this procedure is also computed from two overlap tables that are precompiled by calling the MoveIt collision-checker [32] a number of times that is twice the number of edges (trajectories) in the arm graph. | 1706.06927#20 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form that, unlike PDDL, make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
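After preprocessing, the @nonoverlap state constraint likewise becomes a table lookup. The sketch below shows only the intended query shape with invented keys; the real tables are indexed by arm trajectory, relative object configuration, and gripper status, and are filled offline by the MoveIt collision checker.

```python
# Precompiled table: True when the swept volume of arm trajectory `traj`
# (with or without a held object) does NOT intersect an object at relative
# configuration `rel_conf`. Entries are hypothetical placeholders.
no_collision = {("t12", "co7", False): True}

def nonoverlap(traj, rel_conf, holding):
    """State-constraint check standing in for @nonoverlap(...)."""
    return no_collision.get((traj, rel_conf, holding), False)

assert nonoverlap("t12", "co7", holding=False)
```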
1706.06978 | 20 | training data and add the burden of computation and storage, which may not be tolerated for an industrial online system.
Is there an elegant way to represent the user's diverse interests in one vector under a limited dimension? The local activation characteristic of user interests gives us inspiration to design a novel model named deep interest network (DIN). Imagine when the young mother mentioned above in section 3 visits the e-commerce site, she finds the displayed new handbag cute and clicks it. Let's dissect the driving force of the click action. The displayed ad hits the related interests of this young mother by soft-searching her historical behaviors and finding that she had browsed similar goods, such as tote bags and leather handbags, recently. In other words, behaviors related to the displayed ad greatly contribute to the click action. DIN simulates this process by paying attention to the representation of locally activated interests w.r.t. the given ad. Instead of expressing all the user's diverse interests with the same vector, DIN adaptively calculates the representation vector of user interests by taking into consideration the relevance of historical behaviors w.r.t. the candidate ad. This representation vector varies over different ads. | 1706.06978#20 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
the user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN has now been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 21 | The proof of each of these lemmas is the same. If (t, k) is a "yes" instance to the Group variants of the puzzle problems, then t can be inverted using at most k elements corresponding to moves. Applying exactly those moves to t(C0) yields configuration C0, which is a solved configuration of the cube. Thus it is possible to solve the puzzle in configuration t(C0) in at most k moves. In other words, (t(C0), k) is a "yes" instance to the non-Group variant of the puzzle problem.
At this point it is also worth mentioning that the Rubik's Square with SQTM move model is a strictly more difficult puzzle than the Rubik's Square with STM move model:
Lemma 2.3. If (C, k) is a "yes" instance to the SQTM Rubik's Cube problem, then it is also a "yes" instance to the STM Rubik's Cube problem. Similarly, if (t, k) is a "yes" instance to the
Group SQTM Rubik's Cube problem, then it is also a "yes" instance to the Group STM Rubik's Cube problem. | 1706.06708#21 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 21 | $$\mathrm{FV1}(j,k) = \sum_{i=1}^{N} a_k(x_i)\, \frac{x_i(j) - c_k(j)}{\sigma_k(j)} \qquad (8)$$
$$\mathrm{FV2}(j,k) = \sum_{i=1}^{N} a_k(x_i) \left( \frac{(x_i(j) - c_k(j))^2}{\sigma_k(j)^2} - 1 \right) \qquad (9)$$
where FV1 captures the first-order statistics, FV2 captures the second-order statistics, $c_k$, $k \in [1, K]$ are the learnable clusters, and $\sigma_k$, $k \in [1, K]$ are the clusters' diagonal covariances. To keep $\sigma_k$, $k \in [1, K]$ positive, we first randomly initialize their values with Gaussian noise with unit mean and small variance and then take the square of the values during training so that they stay positive. In the same manner as NetVLAD, $c_k$ and $\sigma_k$ are learnt independently from the parameters of the soft-assignment $a_k$. This formulation differs from [38], [46] as we are not exactly reproducing the original Fisher Vectors. Indeed the parameters $a_k(x_i)$, $c_k$ and $\sigma_k$ are decoupled from each other. As opposed to [38], [46], these parameters are not related to a Gaussian Mixture Model but instead are trained in a discriminative manner. | 1706.06905#21 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
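A numpy sketch of Eq. (10) below in the next chunk: NetRVLAD drops the anchors and aggregates soft-assigned descriptors directly, roughly halving the parameters relative to Eq. (6). Shapes and the random stand-in assignments are illustrative.

```python
import numpy as np

N, D, K = 100, 16, 8
X = np.random.randn(N, D)                    # descriptors x_i
a = np.random.dirichlet(np.ones(K), size=N)  # stand-in soft assignments a_k(x_i)

# Eq. (10): RVLAD(j, k) = sum_i a_k(x_i) * x_i(j) -- no residual against c_k.
rvlad = a.T @ X                              # shape (K, D)
print(rvlad.shape)
```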
1706.06927 | 21 | # IV. PREPROCESSING
The planning encoding shown in Fig. 1 assumes a crucial preprocessing stage where the base and arm graphs are computed, and suitable tables are stored for avoiding the use of motion planners and collision checkers during planning time. This preprocessing is efficient and does not depend on the number of objects, meaning it can be used for several problem variations without having to call collision checkers and motion planners again. Indeed, except for the overlap tables, the rest of the compilation is local and does not depend on the possible robot base configurations at all.
To achieve this, we consider the robot at a virtual base B0 = (x, y, θ) with x = y = θ = 0 in front of a virtual table whose height is the height of the actual tables, and whose dimensions exceed the (local) space that the robot can reach without moving the base. By considering the robot acting in this local virtual space without moving from this virtual base B0, we will obtain all the relevant information about object configurations and arm trajectories, which will carry over to the real robot base configurations B through simple linear transformations that depend on B. The computation of the overlap tables is more subtle and will be considered later. | 1706.06927#21 | Combined Task and Motion Planning as Classical AI Planning | Planning in robotics is often split into task and motion planning. The
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form that, unlike PDDL, make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
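The "simple linear transformations" mentioned above amount to a planar rigid transform from the virtual base frame B0 to a real base pose B = (x, y, θ). A minimal sketch, with poses written as (x, y, theta) triples:

```python
import numpy as np

def to_world(base, local_xy):
    """Map a point expressed in the virtual base frame B0 = (0, 0, 0)
    to world coordinates for a real base pose B = (x, y, theta)."""
    x, y, theta = base
    c, s = np.cos(theta), np.sin(theta)
    R = np.array([[c, -s], [s, c]])
    return R @ np.asarray(local_xy) + np.array([x, y])

# A grasp point computed once in front of the virtual table, reused for any base:
print(to_world((2.0, 1.0, np.pi / 2), (0.5, 0.0)))   # -> [2.  1.5]
```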
1706.06978 | 21 | where $S$ is the training set of size $N$, with $x$ as the input of the network and $y \in \{0, 1\}$ as the label, and $p(x)$ is the output of the network after the softmax layer, representing the predicted probability of sample $x$ being clicked.
4.3 The structure of Deep Interest Network Among all those features of Table 1, user behavior features are critically important and play key roles in modeling user interests in the scenario of e-commerce applications.
The right part of Fig.2 illustrates the architecture of DIN. Compared with the base model, DIN introduces a newly designed local activation unit and keeps the other structures the same. Specifically, activation units are applied on the user behavior features, which perform as a weighted sum pooling to adaptively calculate the user representation $v_U$ given a candidate ad $A$, as shown in Eq.(3) | 1706.06978#21 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
the user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN has now been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 22 |
Group SQTM Rubik's Cube problem, then it is also a "yes" instance to the Group STM Rubik's Cube problem.
To prove this lemma, note that every move in the SQTM move model is a legal move in the STM move model. Then if configuration C can be solved in k or fewer SQTM moves, it can certainly also be solved in k or fewer STM moves. Similarly, if t can be inverted using at most k transformations corresponding to SQTM moves, then it can also be inverted using at most k transformations corresponding to STM moves.
# 2.5 Membership in NP
Consider the graph whose vertices are transformations in RSn (or RCn) and whose edges (a, b) connect transformations a and b for which $a^{-1}b$ is the transformation corresponding to a single move (under the standard Rubik's Square move model or under the STM or SQTM move model). It was shown in [3] that the diameter of this graph is $\Theta(n^2 / \log n)$. This means that any achievable transformation of the puzzle (any transformation in RSn or RCn) can be reached using a polynomial p(n) number of moves. | 1706.06708#22 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 22 | Finally, we have also investigated a simplification of the original NetVLAD architecture that averages the actual descriptors instead of residuals, as first proposed by [47]. We call this variant NetRVLAD (for Residual-less VLAD). This simplification requires fewer parameters and computing operations (about half in both cases). The NetRVLAD descriptor can be written as
$$\mathrm{RVLAD}(j,k) = \sum_{i=1}^{N} a_k(x_i)\, x_i(j) \qquad (10)$$
More information about our TensorFlow [48] implementation of these different aggregation models can be found at: https://github.com/antoine77340/LOUPE
5 EXPERIMENTS This section evaluates alternative architectures for video aggregation and presents results on the Youtube-8M [19] dataset.
# 5.1 Youtube-8M Dataset | 1706.06905#22 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06978 | 22 | The base model obtains a fixed-length representation vector of user interests by pooling all the embedding vectors over the user behavior feature group, as in Eq.(1). This representation vector stays the same for a given user, regardless of what the candidate ads are. In this way, the user representation vector with a limited dimension will be a bottleneck to express the user's diverse interests. To make it capable enough, an easy method is to expand the dimension of the embedding vector, which unfortunately will increase the size of the learning parameters heavily. It will lead to overfitting under limited
$$v_U(A) = f(v_A, e_1, e_2, \ldots, e_H) = \sum_{j=1}^{H} a(e_j, v_A)\, e_j = \sum_{j=1}^{H} w_j e_j, \qquad (3)$$
where $\{e_1, e_2, \ldots, e_H\}$ is the list of embedding vectors of behaviors of user $U$ with length $H$, and $v_A$ is the embedding vector of ad $A$. In this way, $v_U(A)$ varies over different ads. $a(\cdot)$ is a feed-forward network whose output is the activation weight, as illustrated in Fig.2. Apart from the two input embedding vectors, $a(\cdot)$ adds the outer product of them to feed into the subsequent network, which is explicit knowledge to help relevance modeling. | 1706.06978#22 | Deep Interest Network for Click-Through Rate Prediction | Click-through rate prediction is an essential task in industrial
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector is a
bottleneck that makes it difficult for Embedding\&MLP methods to capture
the user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly
improving the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and a data adaptive activation function, which
help train industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN has now been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
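A numpy sketch of Eq. (3) above: a tiny stand-in activation network a(·) scores each behavior embedding against the candidate ad, and the weighted sum (deliberately not softmax-normalized) gives v_U(A). The one-layer scorer and its sizes are illustrative assumptions; the paper's unit is an MLP that also feeds the outer product of the two embeddings.

```python
import numpy as np

D, H = 8, 5                            # embedding dim, number of behaviors
E = np.random.randn(H, D)              # behavior embeddings e_1..e_H
v_A = np.random.randn(D)               # candidate ad embedding
W = np.random.randn(3 * D) * 0.1       # toy one-layer activation unit a(.)

def activation_weight(e_j, v_a):
    # Score [e_j, e_j * v_a, v_a] with a linear layer; a stand-in for the
    # paper's MLP-based local activation unit.
    z = np.concatenate([e_j, e_j * v_a, v_a])
    return float(W @ z)

w = np.array([activation_weight(E[j], v_A) for j in range(H)])
v_U = (w[:, None] * E).sum(axis=0)     # Eq. (3): weighted sum pooling, shape (D,)
print(v_U.shape)
```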
1706.06708 | 23 | Using this fact, we can build an NP algorithm solving the (Group) STM/SQTM Rubik's Cube and the (Group) Rubik's Square problems. In these problems, we are given k and either a starting configuration or a transformation, and we are asked whether it is possible to solve the configuration/invert the transformation in at most k moves. The NP algorithm can nondeterministically make min(k, p(n)) moves and simply check whether this move sequence inverts the given transformation or solves the given puzzle configuration. | 1706.06708#23 | Solving the Rubik's Cube Optimally is NP-complete | In this paper, we prove that optimally solving an $n \times n \times n$
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
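The NP membership argument in the chunk above reduces to guessing a move sequence and checking it in polynomial time. A minimal sketch of the deterministic checking step, under the simplifying assumption that each move is modeled as a permutation of sticker positions:

```python
def certifies_solution(move_perms, start, solved, k):
    """Polynomial-time verifier for a guessed move sequence.

    move_perms: guessed moves, each a permutation tuple where
                perm[i] is the new index of the sticker at position i.
    start:      tuple of sticker labels in the scrambled configuration.
    solved:     tuple of sticker labels in the solved configuration.
    The nondeterministic guessing of at most min(k, p(n)) moves is
    what places the problem in NP; this function is only the check.
    """
    if len(move_perms) > k:
        return False
    state = list(start)
    for perm in move_perms:
        nxt = [None] * len(state)
        for i, label in enumerate(state):
            nxt[perm[i]] = label
        state = nxt
    return tuple(state) == solved
```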
1706.06905 | 23 | # 5.1 Youtube-8M Dataset
The Youtube-8M dataset [19] is composed of approximately 8 million videos. Because of the large scale of the dataset, visual and audio features are pre-extracted and provided with the dataset. Each video is labeled with one or multiple tags referring to the main topic of the video. Figure 5 illustrates examples of videos with their annotations. The original dataset is divided into training, validation and test subsets with 70%, 20% and 10% of videos, respectively. In this work we keep around 20K videos for validation; the remaining samples from the original training and validation subsets are used for training. This choice was made to obtain a larger training set and to decrease the validation time. We have noticed that the performance on our validation set was comparable (0.2%-0.3% higher) to the test performance evaluated on the Kaggle platform. As we have no access to
| 1706.06905#23 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
1706.06927 | 23 | z = h + h'/2. Each virtual object configuration represents a possible center of mass for the objects when sitting at location x_i, y_i over the virtual table. For each such configuration C = (x_i, y_i, z), k grasping poses A^j_C are defined from which an object at (x_i, y_i, z) could be grasped, and a motion planner (MoveIt) is called to compute k' arm trajectories for reaching each such grasping pose A^j_C through k' different waypoints from a fixed resting pose and the robot base fixed at B_0. This means that up to k x k' arm trajectories are computed for each virtual object configuration, resulting in up to D x k x k' arm trajectories in total and up to k x D grasping poses. For each reachable grasping pose A^j_C, we store the pair (A^j_C, C) in a hash table. The table captures the function vplace that maps grasping poses (called arm configurations here) into virtual object configurations. The meaning of vplace(A) = C is that when the robot base is at B_0 in front of the virtual table and the arm configuration is A, an object on the gripper will be placed at the virtual object configuration C.
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
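The precomputation described in the chunk above amounts to building a table from reachable grasping configurations to virtual object configurations. A schematic sketch under stated assumptions: `grasping_poses` enumerates the k candidate poses for a configuration and `plan_trajectory` stands in for the MoveIt motion-planner call; both are hypothetical helpers, not the authors' API.

```python
def build_vplace_table(virtual_configs, grasping_poses, plan_trajectory,
                       resting_pose, k_prime):
    """Sketch: map reachable grasping poses A to virtual object configs C.

    virtual_configs: iterable of C = (x_i, y_i, z) over the virtual table.
    grasping_poses(C): yields the k candidate grasping poses A^j_C for C.
    plan_trajectory(resting_pose, A, attempt): hypothetical stand-in for a
        MoveIt call through one of k' waypoints; returns None on failure.
    Returns vplace as a dict {A: C} over reachable grasping poses only.
    """
    vplace = {}
    for C in virtual_configs:
        for A in grasping_poses(C):
            # Try up to k' trajectories from the fixed resting pose.
            for attempt in range(k_prime):
                if plan_trajectory(resting_pose, A, attempt) is not None:
                    vplace[A] = C   # A is reachable: record the pair (A, C)
                    break
    # Virtual object configurations with no reachable grasping pose are
    # implicitly pruned: they never appear as values in the table.
    return vplace
```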
1706.06978 | 23 | product of them to feed into the subsequent network, which is explicit knowledge to help relevance modeling.
The local activation unit of Eq.(3) shares similar ideas with attention methods developed for the NMT task [1]. However, different from the traditional attention method, the constraint of $\sum_i w_i = 1$ is relaxed in Eq.(3), aiming to preserve the intensity of user interests. That is, normalization with softmax on the output of a(·) is abandoned. Instead, the value of $\sum_i w_i$ is treated, to some degree, as an approximation of the intensity of activated user interests. For example, suppose one user's historical behaviors contain 90% clothes and 10% electronics. Given two candidate ads, a T-shirt and a phone, the T-shirt activates most of the historical behaviors belonging to clothes and may get a larger value of v_U(A) (higher intensity of interest) than the phone. Traditional attention methods lose the resolution on the numerical scale of v_U(A) by normalizing the output of a(·).
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture a
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly improving
the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
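The normalization point in the chunk above is easy to see numerically. A toy comparison (the numbers are invented, not from the paper) of how a softmax discards the overall scale of the activation weights, while the relaxed weighting of Eq.(3) keeps it:

```python
import numpy as np

# Hypothetical raw outputs of a(.) for one user against two candidate ads.
w_tshirt = np.array([2.0, 1.8, 1.9, 0.1])   # strongly activates behaviors
w_phone  = np.array([0.2, 0.1, 0.2, 0.3])   # weakly activates behaviors

# Softmax normalization: both weight vectors sum to 1, so the
# difference in interest intensity between the two ads is lost.
softmax = lambda w: np.exp(w) / np.exp(w).sum()
print(softmax(w_tshirt).sum(), softmax(w_phone).sum())   # 1.0 and 1.0

# Relaxed weighting (Eq. 3): the sums differ, preserving intensity.
print(w_tshirt.sum(), w_phone.sum())                     # 5.8 vs 0.8
```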
1706.06708 | 24 | If any branch accepts, then certainly the answer to the problem is "yes" (since that branch's chosen sequence of moves is a solving/inverting sequence of moves of length at most k). On the other hand, if there is a solving/inverting sequence of moves of length at most k, then there is also one that has length both at most k and at most p(n). This is because p(n) is an upper bound on the diameter of the graph described above. Thus, if the answer to the problem is "yes", then there exists a solving/inverting sequence of moves of length at most min(k, p(n)), and so at least one branch accepts. As desired, the algorithm described is correct. Therefore, we have established membership in NP for the problems in question.
# 3 Hamiltonicity variants
To prove the problems introduced above hard, we need to introduce several variants of the Hamil- tonian cycle and path problems.
It is shown in [5] that the following problem is NP-complete.
Problem 5. A square grid graph is a finite induced subgraph of the infinite square lattice. The Grid Graph Hamiltonian Cycle problem asks whether a given square grid graph with no degree-1 vertices has a Hamiltonian cycle.
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
1706.06905 | 24 | Method                                   GAP
Average pooling + Logistic Regression    71.4%
Average pooling + MoE + CG               74.1%
LSTM (2 Layers)                          81.7%
GRU (2 Layers)                           82.0%
BoW (4096 Clusters)                      81.6%
NetFV (128 Clusters)                     82.2%
NetRVLAD (256 Clusters)                  82.3%
NetVLAD (256 Clusters)                   82.4%
Gated BoW (4096 Clusters)                82.0%
Gated NetFV (128 Clusters)               83.0%
Gated NetRVLAD (256 Clusters)            83.1%
Gated NetVLAD (256 Clusters)             83.2%
TABLE 1: Performance comparison for individual aggregation schemes. Clustering-based methods are compared with and with- out Context Gating.
the test labels, most results in this section are reported for our validation set. We report evaluation using the Global Average Precision (GAP) metric at top 20 as used in the Youtube-8M Kaggle competition (more details about the metric can be found at: https://www.kaggle.com/c/youtube8m#evaluation).
# 5.2 Implementation details | 1706.06905#24 | Learnable pooling with Context Gating for video classification | Current methods for video analysis often extract frame-level features using
pre-trained convolutional neural networks (CNNs). Such features are then
aggregated over time, e.g., by simple temporal averaging or more sophisticated
recurrent neural networks such as long short-term memory (LSTM) or gated
recurrent units (GRU). In this work we revise existing video representations
and study alternative methods for temporal aggregation. We first explore
clustering-based aggregation layers and propose a two-stream architecture
aggregating audio and visual features. We then introduce a learnable non-linear
unit, named Context Gating, aiming to model interdependencies among network
activations. Our experimental results show the advantage of both improvements
for the task of video classification. In particular, we evaluate our method on
the large-scale multi-modal Youtube-8M v2 dataset and outperform all other
methods in the Youtube 8M Large-Scale Video Understanding challenge. | http://arxiv.org/pdf/1706.06905 | Antoine Miech, Ivan Laptev, Josef Sivic | cs.CV | Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under
review for TPAMI | null | cs.CV | 20170621 | 20180305 | [
{
"id": "1502.03167"
},
{
"id": "1602.07261"
},
{
"id": "1706.05150"
},
{
"id": "1609.08675"
},
{
"id": "1706.06905"
},
{
"id": "1603.04467"
},
{
"id": "1706.04572"
},
{
"id": "1707.00803"
},
{
"id": "1612.08083"
},
{
"id": "1707.04555"
},
{
"id": "1709.01507"
}
] |
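A minimal sketch of the GAP-at-top-20 metric referred to above, following the Kaggle definition linked in the chunk: the top-k predictions of all videos are pooled into one global ranked list and average precision is computed over it. This is an illustrative re-implementation, not the competition's official scorer.

```python
import numpy as np

def gap_at_k(scores, labels, k=20):
    """Global Average Precision at top-k over a batch of videos.

    scores: (num_videos, num_classes) predicted confidences.
    labels: (num_videos, num_classes) binary ground-truth matrix.
    """
    pooled = []
    for s, y in zip(scores, labels):
        top = np.argsort(s)[::-1][:k]        # top-k classes for this video
        pooled.extend((s[c], y[c]) for c in top)
    pooled.sort(key=lambda t: t[0], reverse=True)  # global ranking
    total_pos = int(np.sum(labels))          # normalize by all positives
    hits, precision_sum = 0, 0.0
    for rank, (_, y) in enumerate(pooled, start=1):
        if y:
            hits += 1
            precision_sum += hits / rank
    return precision_sum / max(total_pos, 1)
```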
1706.06927 | 24 | The arm graph has as nodes the arm configurations A that represent reachable grasping poses A = A^j_C in relation to some virtual object configuration C, in addition to the resting arm configuration. The arm trajectories that connect the resting arm configuration A_0 with an arm configuration A provide the edge in the arm graph between A_0 and A. The graph also contains the inverse edges that correspond to the same trajectories reversed. Grasping configurations that are not reachable with any trajectory from the resting arm configuration are pruned, and virtual object configurations all of whose grasping poses have been pruned are pruned as well.
high-level, symbolic task planner decides what needs to be done, while the
motion planner checks feasibility and fills up geometric detail. It is known
however that such a decomposition is not effective in general as the symbolic
and geometrical components are not independent. In this work, we show that it
is possible to compile task and motion planning problems into classical AI
planning problems; i.e., planning problems over finite and discrete state
spaces with a known initial state, deterministic actions, and goal states to be
reached. The compilation is sound, meaning that classical plans are valid robot
plans, and probabilistically complete, meaning that valid robot plans are
classical plans when a sufficient number of configurations is sampled. In this
approach, motion planners and collision checkers are used for the compilation,
but not at planning time. The key elements that make the approach effective are
1) expressive classical AI planning languages for representing the compiled
problems in compact form, that unlike PDDL make use of functions and state
constraints, and 2) general width-based search algorithms capable of finding
plans over huge combinatorial spaces using weak heuristics only. Empirical
results are presented for a PR2 robot manipulating tens of objects, for which
long plans are required. | http://arxiv.org/pdf/1706.06927 | Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner | cs.RO, cs.AI | 10 pages, 2 figures | null | cs.RO | 20170621 | 20170621 | [] |
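The arm-graph construction in the chunk above can be phrased as a small graph built over the vplace table sketched earlier; `plan_trajectory` is again a hypothetical stand-in for the motion-planner call, so this is an outline of the bookkeeping only.

```python
def build_arm_graph(vplace, resting, plan_trajectory):
    """Sketch: arm graph over reachable grasping configurations.

    vplace: dict {A: C} from the precomputation sketch above.
    Nodes are the resting configuration plus every reachable grasping
    configuration A; each planned trajectory contributes the edge
    (resting, A) and its reverse (A, resting).
    """
    nodes, edges, kept = {resting}, set(), {}
    for A, C in vplace.items():
        if plan_trajectory(resting, A) is None:
            continue                  # unreachable grasping pose: pruned
        nodes.add(A)
        edges.add((resting, A))
        edges.add((A, resting))       # same trajectory, reversed
        kept[A] = C
    # Virtual object configurations all of whose grasping poses were
    # pruned no longer occur among the values of `kept`.
    return nodes, edges, kept
```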
1706.06978 | 24 | We have tried LSTMs to model user historical behavior data in a sequential manner, but this shows no improvement. Different from text, which is under the constraint of grammar in NLP tasks, the sequence of user historical behaviors may contain multiple concurrent interests. Rapid jumping and sudden ending over these interests cause the sequence data of user behaviors to seem noisy. A possible direction is to design special structures to model such data in a sequential way. We leave this for future research.
5 TRAINING TECHNIQUES In the advertising system in Alibaba, the numbers of goods and users scale up to hundreds of millions. Practically, training industrial deep networks with large-scale sparse input features is of great challenge. In this section, we introduce two important techniques which are proven to be helpful in practice.
applications, such as online advertising. Recently deep learning based models
have been proposed, which follow a similar Embedding\&MLP paradigm. In these
methods large scale sparse input features are first mapped into low dimensional
embedding vectors, and then transformed into fixed-length vectors in a
group-wise manner, and finally concatenated together to be fed into a multilayer
perceptron (MLP) to learn the nonlinear relations among features. In this way,
user features are compressed into a fixed-length representation vector,
regardless of what the candidate ads are. The use of a fixed-length vector will be a
bottleneck, which brings difficulty for Embedding\&MLP methods to capture a
user's diverse interests effectively from rich historical behaviors. In this
paper, we propose a novel model: Deep Interest Network (DIN), which tackles this
challenge by designing a local activation unit to adaptively learn the
representation of user interests from historical behaviors with respect to a
certain ad. This representation vector varies over different ads, greatly improving
the expressive ability of the model. Besides, we develop two techniques:
mini-batch aware regularization and data adaptive activation function which can
help training industrial deep networks with hundreds of millions of parameters.
Experiments on two public datasets as well as an Alibaba real production
dataset with over 2 billion samples demonstrate the effectiveness of the proposed
approaches, which achieve superior performance compared with state-of-the-art
methods. DIN now has been successfully deployed in the online display
advertising system in Alibaba, serving the main traffic. | http://arxiv.org/pdf/1706.06978 | Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai | stat.ML, cs.LG, I.2.6; H.3.2 | Accepted by KDD 2018 | null | stat.ML | 20170621 | 20180913 | [
{
"id": "1704.05194"
}
] |
1706.06708 | 25 | Starting with this problem, we prove that the following promise version of the grid graph Hamiltonian path problem is also NP-hard.
Problem 6. The Promise Grid Graph Hamiltonian Path problem takes as input a square grid graph G and two specified vertices s and t with the promise that any Hamiltonian path in G has s and t as its start and end respectively. The problem asks whether there exists a Hamiltonian path in G.
The above problem is more useful, but it is still inconvenient in some ways. In particular, there is no conceptually simple way to connect a grid graph to a Rubik's Square or Rubik's Cube puzzle. It is the case, however, that every grid graph is actually a type of graph called a "cubical graph". Cubical graphs, unlike grid graphs, can be conceptually related to Rubik's Cubes and Rubik's Squares with little trouble.
So what is a cubical graph? Let H_m be the m-dimensional hypercube graph; in particular, the vertices of H_m are the bitstrings of length m, and the edges connect pairs of bitstrings whose Hamming distance is exactly one. Then a cubical graph is any induced subgraph of any hypercube graph H_m.
Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in
square grid graphs. This improves the previous result that optimally solving an
$n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We
prove this result first for the simpler case of the Rubik's Square---an $n
\times n \times 1$ generalization of the Rubik's Cube---and then proceed with a
similar but more complicated proof for the Rubik's Cube case. | http://arxiv.org/pdf/1706.06708 | Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy | cs.CC, cs.CG, math.CO, F.1.3 | 35 pages, 8 figures | null | cs.CC | 20170621 | 20180427 | [] |
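The definition of the hypercube graph H_m above translates directly into code. A small self-contained sketch (a restatement of the definition, not part of the paper) that tests hypercube adjacency and lists the edges of the cubical graph induced by a set of bitstrings:

```python
def hamming_adjacent(u, v):
    """Bitstrings u, v of equal length m are adjacent in the hypercube
    graph H_m iff their Hamming distance is exactly one."""
    return sum(a != b for a, b in zip(u, v)) == 1

def induced_hypercube_edges(vertices):
    """Edges of the subgraph of H_m induced by a set of length-m
    bitstrings -- i.e., the cubical graph they define."""
    vs = sorted(vertices)
    return [(u, v) for i, u in enumerate(vs)
            for v in vs[i + 1:] if hamming_adjacent(u, v)]

# Example: the 4-cycle as a cubical graph inside H_2.
print(induced_hypercube_edges({"00", "01", "10", "11"}))
# [('00', '01'), ('00', '10'), ('01', '11'), ('10', '11')]
```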