Dataset schema: doi (string, 10 chars), chunk-id (int64, 0–936), chunk (string, 401–2.02k chars), id (string, 12–14 chars), title (string, 8–162 chars), summary (string, 228–1.92k chars), source (string, 31 chars), authors (string, 7–6.97k chars), categories (string, 5–107 chars), comment (string, 4–398 chars), journal_ref (string, 8–194 chars), primary_category (string, 5–17 chars), published (string, 8 chars), updated (string, 8 chars), references (list).
1706.06905
25
# 5.2 Implementation details

In the Youtube 8M competition dataset [19], video and audio features are provided for every second of the input video. The visual features consist of ReLU activations of the last fully-connected layer from a publicly available² Inception network trained on ImageNet. The audio features are extracted from a CNN architecture trained for audio classification [49]. PCA and whitening are then applied to reduce the dimension to 1024 for the visual features and 128 for the audio features. More details on feature extraction are available in [19]. All of our models are trained using the Adam algorithm [50] and mini-batches with data from around 100 videos. The learning rate is initially set to 0.0002 and is then decreased exponentially by a factor of 0.8 every 4M samples. We use gradient clipping and batch normalization [51] before each non-linear layer.
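As a concrete illustration of this training setup, here is a minimal sketch using current TensorFlow Keras APIs (the paper used an earlier TensorFlow; the clipping threshold and the conversion of 4M samples into optimizer steps are assumptions, not the authors' code):

```python
import tensorflow as tf

# ~4M samples per decay period; with mini-batches of ~100 videos this is
# roughly 40k optimizer steps (an assumption about how steps are counted).
steps_per_decay = 4_000_000 // 100

# Learning rate 0.0002, decayed exponentially by a factor of 0.8.
lr_schedule = tf.keras.optimizers.schedules.ExponentialDecay(
    initial_learning_rate=2e-4,
    decay_steps=steps_per_decay,
    decay_rate=0.8,
    staircase=True)

# Adam with gradient clipping; the clip norm of 1.0 is an assumption,
# the paper does not state its threshold.
optimizer = tf.keras.optimizers.Adam(learning_rate=lr_schedule, clipnorm=1.0)

def dense_bn_relu(x, units):
    """Batch normalization before each non-linear layer, as in the paper."""
    x = tf.keras.layers.Dense(units, use_bias=False)(x)
    x = tf.keras.layers.BatchNormalization()(x)
    return tf.keras.layers.ReLU()(x)
```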
1706.06905#25
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
25
The base graph is computed by sampling a number of configurations N_B near the tables and calling the MoveIt motion planner to connect each such configuration with up to k_B of its closest neighbours. The number of robot configurations results from the product of the number of arm configurations k × D and the number of base configurations N_B. In the experiments we consider numbers that go from tens to a few hundred, which thus result in thousands of possible robot configurations. The computation of the base and arm graphs defines the procedures used in the MoveBase and MoveArm actions that access the source and target configuration of each graph edge.
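The construction can be sketched as follows, with sample_base_configuration() and plan_motion() as hypothetical stand-ins for the sampling routine and the MoveIt planner call; this is a schematic reading of the text, not the authors' implementation:

```python
import heapq

def build_base_graph(n_b, k_b, sample_base_configuration, plan_motion):
    """Sample n_b base configurations near the tables and try to connect
    each one to up to k_b of its nearest neighbours with a motion planner.
    Configurations are assumed to be (x, y, theta) tuples."""
    nodes = [sample_base_configuration() for _ in range(n_b)]
    edges = {}
    for i, b in enumerate(nodes):
        # k_b closest candidates by squared Euclidean distance (plus self)
        closest = heapq.nsmallest(
            k_b + 1, range(n_b),
            key=lambda j: sum((p - q) ** 2 for p, q in zip(b, nodes[j])))
        for j in closest:
            if j == i:
                continue
            trajectory = plan_motion(b, nodes[j])  # None if planning fails
            if trajectory is not None:
                edges[(i, j)] = trajectory
    return nodes, edges
```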
1706.06927#25
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
25
5.1 Mini-batch Aware Regularization

Overfitting is a critical challenge for training industrial networks. For example, with the addition of fine-grained features, such as features of goods_ids with dimensionality of 0.6 billion (including visited_goods_ids of the user and goods_id of the ad, as described in Table 1), model performance falls rapidly after the first epoch during training without regularization, as the dark green line shows in Fig. 4 in Section 6.5. It is not practical to directly apply traditional regularization methods, such as ℓ2 and ℓ1 regularization, when training networks with sparse inputs and hundreds of millions of parameters. Take ℓ2 regularization as an example. Only parameters of non-zero sparse features appearing in each mini-batch need to be updated in the scenario of SGD-based optimization methods without regularization. However, when adding ℓ2 regularization, the L2-norm must be calculated over the whole set of parameters for each mini-batch, which leads to extremely heavy computation and is unacceptable with parameters scaling up to hundreds of millions.

[Figure 3: Control function of PReLU and Dice.]
1706.06978#25
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector is a bottleneck, which makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
26
Notably, when embedding a grid graph into a hypercube, it is always possible to assign the bitstring label 00...0 to any vertex. Suppose we start with a Promise Grid Graph Hamiltonian Path problem instance (G, s, t); then by embedding G into a hypercube graph, we can reinterpret this instance as an instance of the promise version of cubical Hamiltonian path:

Problem 7. The Promise Cubical Hamiltonian Path problem takes as input a cubical graph whose vertices are length-m bitstrings l_1, l_2, ..., l_n with the promise that (1) l_n = 00...0 and (2) any Hamiltonian path in the graph has l_1 and l_n as its start and end respectively. The problem asks whether there exists a Hamiltonian path in the cubical graph. In other words, the problem asks whether it is possible to rearrange bitstrings l_1, ..., l_n into a new order such that each bitstring has Hamming distance one from the next.

In the remainder of this section, we prove that Problems 6 and 7 are NP-hard.

# 3.1 Promise Grid Graph Hamiltonian Path is NP-hard

First, we reduce from the Grid Graph Hamiltonian Cycle problem to the Promise Grid Graph Hamiltonian Path problem.

Lemma 3.1. The Promise Grid Graph Hamiltonian Path problem (Problem 6) is NP-hard.
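To make the definition concrete, here is a tiny brute-force checker for the rearrangement formulation (exponential time, purely illustrative; the point of the section is that the problem is NP-hard):

```python
from itertools import permutations

def hamming_one(a: str, b: str) -> bool:
    """True iff equal-length bitstrings a and b differ in exactly one bit."""
    return sum(x != y for x, y in zip(a, b)) == 1

def has_cubical_hamiltonian_path(labels):
    """True iff some reordering of the bitstrings has Hamming distance one
    between every pair of consecutive bitstrings."""
    return any(
        all(hamming_one(order[i], order[i + 1]) for i in range(len(order) - 1))
        for order in permutations(labels))

# Example: 00 -> 01 -> 11 -> 10 is such an ordering.
assert has_cubical_hamiltonian_path(["00", "11", "01", "10"])
```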
1706.06708#26
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
26
For the clustering-based pooling models, i.e. BoW, NetVLAD, NetRVLAD and NetFV, we randomly sample N features with replacement from each video. N is fixed for all videos at training and testing. As opposed to the original version of NetVLAD [23], we did not pre-train the codebook with a k-means initialization, as we did not notice any improvement by doing so. For training of recurrent models, i.e. LSTM and GRU, we process features in the temporal order. We have also experimented with random sampling of frames for LSTM and GRU, which performs surprisingly similarly. All our models are trained with the cross entropy loss. Our implementation uses the TensorFlow framework [48]. Each training is performed on a single NVIDIA TITAN X (12 GB) GPU.

# 5.3 Model evaluation

We evaluate the performance of individual models in Table 1. To enable a fair comparison, all pooled representations have the same size of 1024 dimensions. The "Gated" versions for the clustering-based pooling methods include CG layers as described

2. https://www.tensorflow.org/tutorials/image_recognition
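The sampling step can be sketched in a few lines (the value N = 300 is a placeholder; the chunk fixes N across training and testing but does not state its value):

```python
import numpy as np

def sample_features(frame_features: np.ndarray, n: int = 300) -> np.ndarray:
    """frame_features: (num_frames, dim) per-second features of one video.
    Returns (n, dim) features drawn uniformly with replacement."""
    idx = np.random.randint(0, frame_features.shape[0], size=n)
    return frame_features[idx]
```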
1706.06905#26
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
26
The set of (real) object configurations is then defined and computed as follows. The virtual object configuration C = (x, y, z) represents the 3D position of the object before a pick up or after a place action, with the arm at configuration A and the robot base at the virtual base configuration B0 = (0, 0, 0). As the robot moves from this "virtual" base to an arbitrary base B in the base graph, the point C determined by the same arm configuration A moves to a new point C' that is given by a transformation T_B(C) of C that depends solely on B. Indeed, if B = B0 + (Δx, Δy, Δθ) with Δθ = 0, then T_B(C) = (x + Δx, y + Δy, z). More generally, for any Δθ, T_B(C) = (x', y', z) with

x' = Δx + (x − Δx) cos(Δθ) − (y − Δy) sin(Δθ),
y' = Δy + (x − Δx) sin(Δθ) + (y − Δy) cos(Δθ).

The set of actual object configurations is then given by such triplets T_B(C) = (x', y', z) for which 1) B is a
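A direct implementation of the transformation T_B as reconstructed above (a sketch; variable names are illustrative):

```python
import math

def transform_tb(c, b):
    """T_B(C): maps a virtual object configuration C = (x, y, z), given
    relative to the virtual base B0 = (0, 0, 0), to its position once the
    base has moved to B = B0 + (dx, dy, dtheta)."""
    x, y, z = c
    dx, dy, dtheta = b
    xp = dx + (x - dx) * math.cos(dtheta) - (y - dy) * math.sin(dtheta)
    yp = dy + (x - dx) * math.sin(dtheta) + (y - dy) * math.cos(dtheta)
    return (xp, yp, z)
```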
1706.06927#26
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
26
[Figure 3: Control function of PReLU and Dice.]

In this paper, we introduce an efficient mini-batch aware regularizer, which only calculates the L2-norm over the parameters of sparse features appearing in each mini-batch and makes the computation possible. In fact, it is the embedding dictionary that contributes most of the parameters for CTR networks and gives rise to the difficulty of heavy computation. Let $W \in \mathbb{R}^{D \times K}$ denote the parameters of the whole embedding dictionary, with D as the dimensionality of the embedding vectors and K as the dimensionality of the feature space. Expanding the $\ell_2$ regularization on W over samples:

$$L_2(W) = \|W\|_2^2 = \sum_{j=1}^{K} \|w_j\|_2^2 = \sum_{(x,y) \in S} \sum_{j=1}^{K} \frac{I(x_j \neq 0)}{n_j} \|w_j\|_2^2, \qquad (4)$$

where $w_j \in \mathbb{R}^D$ is the j-th embedding vector, $I(x_j \neq 0)$ denotes whether the instance x has the feature id j, and $n_j$ denotes the number of occurrences of feature id j in all samples. Eq. (4) can be transformed into Eq. (5) in the mini-batch aware manner
1706.06978#26
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector is a bottleneck, which makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
27
Lemma 3.1. The Promise Grid Graph Hamiltonian Path problem (Problem 6) is NP-hard.

Proof: Consider an instance G of the Grid Graph Hamiltonian Cycle problem. Consider the vertices in the top row of G and let the leftmost vertex in this row be u. u has no neighbors on its left or above it, so it must have a neighbor to its right (since G has no degree-1 vertices). Let that vertex be u'. We can add vertices to G above u and u' as shown in Figure 3 to obtain a new grid graph G' in polynomial time. Note that two of the added vertices are labeled v and v'. Also note that the only edges that are added are those shown in the figure, since no vertices in G are above u.

[Figure 3: The vertices added to G to obtain G'.]

First notice that (G', v, v') is a valid instance of the Promise Grid Graph Hamiltonian Path problem. In particular, (G', v, v') satisfies the promise (any Hamiltonian path in G' must have v and v' as endpoints) since both v and v' have degree 1.
1706.06708#27
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
27
2. https://www.tensorflow.org/tutorials/image_recognition

TABLE 2: Context Gating ablation study. There is no GLU layer after MoE as GLU does not output probabilities.

| After pooling | After MoE | GAP |
|---|---|---|
| - | - | 82.2% |
| Gated Linear Unit | - | 82.4% |
| Context Gating | - | 82.7% |
| Gated Linear Unit | Context Gating | 82.7% |
| Context Gating | Context Gating | 83.0% |

TABLE 3: Evaluation of audio-video fusion methods (Early and Late Concat).

| Method | NetVLAD | NetFV | GRU | LSTM |
|---|---|---|---|---|
| Early Concat | 81.9% | 81.2% | 82.2% | 81.7% |
| Late Concat | 82.4% | 82.2% | 82.1% | 81.1% |

in Section 3.1. Using CG layers together with GRU and LSTM decreased the performance in our experiments. From Table 1 we can observe a significant increase of performance provided by all learnt aggregation schemes compared to the Average pooling baselines. Interestingly, the NetVLAD and NetFV representations based on the temporally-shuffled feature pooling outperform the temporal models (GRU and LSTM). Finally, we note a consistent increase in performance provided by the Context Gating for all clustering-based pooling methods.

# 5.4 Context Gating ablation study
1706.06905#27
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
27
node of the base graph, 2) C is a virtual object configuration, and 3) the 2D point (x', y') falls within a table in the actual environment. That is, while the virtual object configurations live only in the virtual table with the base fixed at B0, the actual object configurations depend on the virtual object configurations, the base configurations, and the real tables in the working space. We will write T_B(C) = ⊥ when C and B are such that for T_B(C) = (x', y', z'), the 2D point (x', y') does not fall within a table in the actual environment. In such a case, T_B(C) does not denote an actual object configuration. Given the linear transformation T_B, and the function vpose(A) defined above that maps an arm configuration into a virtual object configuration relative to the virtual base B0, the procedures denoted by the symbols @graspable, @placeable, and @pose in the planning encoding are defined as follows:

@pose(B, A) = C' iff C' = T_B(vpose(A)),
@graspable(B, A, C') = true iff C' = @pose(B, A),
@placeable(B, A) = true iff @pose(B, A) ≠ ⊥.
1706.06927#27
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
27
$$L_2(W) = \sum_{j=1}^{K} \sum_{m=1}^{B} \sum_{(x,y) \in \mathcal{B}_m} \frac{I(x_j \neq 0)}{n_j} \|w_j\|_2^2, \qquad (5)$$

where B denotes the number of mini-batches and $\mathcal{B}_m$ denotes the m-th mini-batch. Let $\alpha_{mj} = \max_{(x,y) \in \mathcal{B}_m} I(x_j \neq 0)$ denote whether there is at least one instance having the feature id j in mini-batch $\mathcal{B}_m$. Then Eq. (5) can be approximated by

$$L_2(W) \approx \sum_{j=1}^{K} \sum_{m=1}^{B} \frac{\alpha_{mj}}{n_j} \|w_j\|_2^2. \qquad (6)$$

In this way, we derive an approximated mini-batch aware version of $\ell_2$ regularization. For the m-th mini-batch, the gradient-descent update w.r.t. the embedding weights of feature j is

$$w_j \leftarrow w_j - \eta \left[ \frac{1}{|\mathcal{B}_m|} \sum_{(x,y) \in \mathcal{B}_m} \frac{\partial L(p(x), y)}{\partial w_j} + \lambda \frac{\alpha_{mj}}{n_j} w_j \right], \qquad (7)$$

in which only parameters of features appearing in the m-th mini-batch participate in the computation of regularization.

5.2 Data Adaptive Activation Function

PReLU [12] is a commonly used activation function
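A NumPy sketch of the update in Eq. (7): only embedding rows whose features occur in the current mini-batch receive an L2 penalty, scaled by 1/n_j. All names (W, grad_W, n, lam, eta) are illustrative, not the authors' production code:

```python
import numpy as np

def mba_reg_step(W, grad_W, batch_feature_ids, n, lam, eta):
    """One step of Eq. (7).
    W: (K, D) embedding dictionary; grad_W: loss gradient w.r.t. W, already
    averaged over the mini-batch; batch_feature_ids: ids j with alpha_mj = 1;
    n[j]: global occurrence count of feature j; lam, eta: lambda and the
    learning rate."""
    update = grad_W.copy()
    for j in batch_feature_ids:          # regularize only visited rows
        update[j] += lam * W[j] / n[j]
    W -= eta * update
    return W
```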
1706.06978#27
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector is a bottleneck, which makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
28
Below we show that (G', v, v') is a "yes" instance of the Promise Grid Graph Hamiltonian Path problem (i.e., G' has a Hamiltonian path) if and only if G is a "yes" instance of the Grid Graph Hamiltonian Cycle problem (i.e., G has a Hamiltonian cycle). First suppose G contains a Hamiltonian cycle. This cycle necessarily contains edge (u, u') because u has only two neighbors; removing this edge yields a Hamiltonian path from u' to u in G. This path can be extended by adding paths from v' to u' and from u to v into a Hamiltonian path in G' from v' to v. On the other hand, suppose G' has a Hamiltonian path. Such a path must have v and v' as the two endpoints, and it is easy to show that the two short paths between u and v and between u' and v' must be the start and end of this path. In other words, if G' has a Hamiltonian path, then the central part of this path is a Hamiltonian path in G between u and u'. Adding edge (u, u'), we obtain a Hamiltonian cycle in G.
1706.06708#28
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
28
# 5.4 Context Gating ablation study

Table 2 reports an ablation study evaluating the effect of Context Gating on the NetVLAD aggregation with 128 clusters. The addition of CG layers in the feature pooling and classification modules gives a significant increase in GAP. We have observed a similar behavior for NetVLAD with 256 clusters. We also experimented with replacing the Context Gating by the GLU [39] after pooling. To make the comparison fair, we added a Context Gating layer just after the MoE. Despite being less complex than GLU, we observe that CG also performs better. We note that the improvement of 0.8% provided by CG is similar to the improvement of the best non-gated model (NetVLAD) over LSTM in Table 1.

# 5.5 Video-Audio fusion
1706.06905#28
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
28
We are left to specify the compilation of the tables required for computing the @nonoverlap procedure without calling a collision checker at planning time. This procedure is used in the state constraints @nonoverlap(B, Traj, Conf(o), Hold) for ruling out actions that move the arm along a trajectory Traj that, for the current base configuration B and content of the gripper Hold, would cause a collision with some object o in its current configuration Conf(o). For making these tests at planning time efficient, we precompile two additional tables, called the holding and non-holding overlap tables (HT, NT), which are made of pairs (Tr, C) where Tr is a trajectory in the arm graph and C is what we will call a relative object configuration, different from both the virtual and the real object configurations. Indeed, the set of relative object configurations is defined as the set of configurations T_B^{-1}(C) for all bases B and all real object configurations C, where T_B^{-1} is the inverse of the linear transformation T_B above. If C' is a real 3D point obtained by mapping a point C in the virtual table after the robot base changes from B0 to B, then C'' =
1706.06927#28
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
28
5.2 Data Adaptive Activation Function

PReLU [12] is a commonly used activation function:

$$f(s) = \begin{cases} s & \text{if } s > 0 \\ \alpha s & \text{if } s \leq 0 \end{cases} = p(s) \cdot s + (1 - p(s)) \cdot \alpha s, \qquad (8)$$

where s is one dimension of the input of the activation function $f(\cdot)$ and $p(s) = I(s > 0)$ is an indicator function which controls $f(s)$ to switch between the two channels $f(s) = s$ and $f(s) = \alpha s$. $\alpha$ in the second channel is a learned parameter. Here we refer to $p(s)$ as the control function. The left part of Fig. 3 plots the control function of PReLU. PReLU takes a hard rectified point with value 0, which may not be suitable when the inputs of each layer follow different distributions. Taking this into consideration, we design a novel data adaptive activation function named Dice:

$$f(s) = p(s) \cdot s + (1 - p(s)) \cdot \alpha s, \qquad p(s) = \frac{1}{1 + e^{-\frac{s - E[s]}{\sqrt{Var[s] + \epsilon}}}}, \qquad (9)$$
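A minimal NumPy sketch of Dice in Eq. (9) for the training phase, where E[s] and Var[s] are mini-batch statistics (the following chunk explains that moving averages replace them at test time); shapes and the per-unit alpha are assumptions:

```python
import numpy as np

def dice(s, alpha, eps=1e-8):
    """s: (batch, units) pre-activations; alpha: learnable per-unit parameter."""
    mean = s.mean(axis=0, keepdims=True)      # E[s] over the mini-batch
    var = s.var(axis=0, keepdims=True)        # Var[s] over the mini-batch
    p = 1.0 / (1.0 + np.exp(-(s - mean) / np.sqrt(var + eps)))
    return p * s + (1.0 - p) * alpha * s
```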
1706.06978#28
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector is a bottleneck, which makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
29
By the above reduction, the Promise Grid Graph Hamiltonian Path problem is NP-hard.

# 3.2 Promise Cubical Hamiltonian Path is NP-hard

Second, we reduce from the Promise Grid Graph Hamiltonian Path problem to the Promise Cubical Hamiltonian Path problem.

Theorem 3.2. The Promise Cubical Hamiltonian Path problem (Problem 7) is NP-hard.

Proof: Consider an instance (G, s, t) of the Promise Grid Graph Hamiltonian Path problem. Suppose G has m_r rows, m_c columns, and n vertices. Assign a bitstring label to each row and a bitstring label to each column. In particular, let the row labels from top to bottom be the following length m_r − 1 bitstrings: 000...0, 100...0, 110...0, ..., and 111...1. Similarly, let the column labels from left to right be the following length m_c − 1 bitstrings: 000...0, 100...0, 110...0, ..., and 111...1. Then assign each vertex a bitstring label of length m = m_r + m_c − 2 consisting of the concatenation of its row label followed by its column label.
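The labeling scheme and the Hamming-distance property it guarantees can be sketched as follows (helper names are illustrative):

```python
def staircase_labels(m):
    """m labels of length m-1 (000..0, 100..0, ..., 111..1); adjacent
    labels differ in exactly one bit."""
    return ["1" * i + "0" * (m - 1 - i) for i in range(m)]

def vertex_label(r, c, m_r, m_c):
    """Concatenation of the row label and the column label."""
    return staircase_labels(m_r)[r] + staircase_labels(m_c)[c]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

# Grid-adjacent vertices get Hamming distance one, diagonal ones get two:
assert hamming(vertex_label(1, 2, 4, 4), vertex_label(1, 3, 4, 4)) == 1
assert hamming(vertex_label(1, 2, 4, 4), vertex_label(2, 3, 4, 4)) == 2
```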
1706.06708#29
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
29
# 5.5 Video-Audio fusion

In addition to the late fusion of audio and video streams (Late Concat) described in Section 3, we have also experimented with a simple concatenation of the original audio and video features into a single vector, followed by the pooling and classification modules in a "single stream manner" (Early Concat). Results in Table 3 illustrate the effect of the two fusion schemes for different pooling methods. The two-stream audio-visual architecture with the late fusion improves performance for the clustering-based pooling methods (NetVLAD and NetFV). On the other hand, the early fusion scheme seems to work better for GRU and LSTM aggregations. We have also experimented with replacing the concatenation fusion of audio-video features by their outer product. We found this did not work well compared to the concatenation, mainly due to the high dimensionality of the resulting output. To alleviate this issue, we tried to reduce the output dimension using the multi-modal compact bilinear pooling approach [52] but found the resulting models underfitting the data.

[Fig. 4: GAP performance of Gated NetVLAD, NetVLAD, LSTM and Average pooling when varying the dataset size (GAP vs. number of training samples).]
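Schematically, the two fusion schemes compared in Table 3 differ only in where the concatenation happens; pool_joint, pool_video and pool_audio below stand for any of the pooling modules above (hypothetical names):

```python
import numpy as np

def early_concat(video, audio, pool_joint):
    """Concatenate per-frame features first, then pool a single stream."""
    return pool_joint(np.concatenate([video, audio], axis=-1))

def late_concat(video, audio, pool_video, pool_audio):
    """Pool each modality separately, then concatenate the pooled vectors."""
    return np.concatenate([pool_video(video), pool_audio(audio)], axis=-1)
```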
1706.06905#29
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
29
If C' is a real 3D point obtained by mapping a point C in the virtual table after the robot base changes from B0 to B, then C'' = T_{B'}^{-1}(C') for B' = B is just C, but for B' ≠ B it denotes a point in the "virtual" space relative to the base B0 that may not correspond to a virtual object configuration, and may even fail to be in the space of the virtual table (the local space of the robot when fixed at base B0). Relative object configurations C'' that do not fall within the virtual table are pruned. The holding overlap table (HT) then contains the pair (Tr, C) for a trajectory Tr and a relative object configuration C iff the robot arm moving along trajectory Tr will collide with an object in the virtual configuration C when the robot base is at B0 and the gripper is carrying an object. Similarly, the pair (Tr, C) belongs to the non-holding overlap table (NT) iff the same condition arises when the gripper is empty. Interestingly, each of these two tables is compiled by calling a collision checker (MoveIt) a number of times that is given by the total number of arm
1706.06927#29
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
29
with the control function plotted in the right part of Fig. 3. In the training phase, E[s] and Var[s] are the mean and variance of the input in each mini-batch. In the testing phase, E[s] and Var[s] are calculated by moving averages of E[s] and Var[s] over the data. ϵ is a small constant, set to $10^{-8}$ in our practice. Dice can be viewed as a generalization of PReLU. The key idea of Dice is to adaptively adjust the rectified point according to the distribution of the input data, whose value is set to be the mean of the input. Besides, Dice switches smoothly between the two channels. When E[s] = 0 and Var[s] = 0, Dice degenerates into PReLU.

6 EXPERIMENTS

In this section, we present our experiments in detail, including datasets, evaluation metric, experimental setup, model comparison and the corresponding analysis. Experiments on two public datasets with user behaviors, as well as a dataset collected from the display advertising system in Alibaba, demonstrate the effectiveness of the proposed approach, which outperforms state-of-the-art methods on the CTR prediction task. Both the public datasets and the experiment code are made available1.
1706.06978#29
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector is a bottleneck, which makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
30
Consider any two vertices. Their labels have Hamming distance one if and only if the vertices' column labels are the same and their row labels have Hamming distance one, or vice versa. By construction, two row/column labels are the same if and only if the two rows/columns are the same, and they have Hamming distance one if and only if the two rows/columns are adjacent. Thus two vertices' labels have Hamming distance one if and only if the two vertices are adjacent in G. In other words, we have expressed G as a cubical graph by assigning these bitstring labels to the vertices of G. In particular, suppose the vertices of G are v_1, v_2, ..., v_n with v_1 = s and v_n = t. Let l'_i be the label of v_i. Then the bitstrings l'_1, l'_2, ..., l'_n specify the cubical graph that is G. Define l_i = l'_n ⊕ l'_i. Under this definition,
1706.06708#30
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06978
30
6.1 Datasets and Experimental Setup

Amazon Dataset2. The Amazon Dataset contains product reviews and metadata from Amazon and is used as a benchmark dataset [13, 18, 23]. We conduct experiments on a subset named Electronics, which contains 192,403 users, 63,001 goods, 801 categories and 1,689,188 samples. User behaviors in this dataset are rich, with more than 5 reviews per user and per good. Features include goods_id, cate_id, the user's reviewed goods_id_list and cate_id_list. Letting all behaviors of a user be (b1, b2, ..., bk, ..., bn), the task is to predict the (k+1)-th reviewed good by making use of the first k reviewed goods. Training samples are generated with k = 1, 2, ..., n − 2 for each user. In the test set, we predict the last one given the first n − 1 reviewed goods. For all models, we use SGD as the optimizer with exponential decay, in which the learning rate starts at 1 and the decay rate is set to 0.1. The mini-batch size is set to 32.
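The per-user sample generation described above amounts to the following sketch (illustrative only):

```python
def make_samples(behaviors):
    """behaviors: (b1, ..., bn) for one user. Training pairs use prefixes
    of length k = 1..n-2 to predict the (k+1)-th item; the test pair
    predicts bn from the first n-1 items."""
    train = [(behaviors[:k], behaviors[k]) for k in range(1, len(behaviors) - 1)]
    test = (behaviors[:-1], behaviors[-1])
    return train, test

train, test = make_samples(["b1", "b2", "b3", "b4"])
# train: ([b1], b2), ([b1, b2], b3); test: ([b1, b2, b3], b4)
```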
1706.06978#30
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated together to be fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. The use of a fixed-length vector is a bottleneck, which makes it difficult for Embedding&MLP methods to capture the user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques: mini-batch aware regularization and a data adaptive activation function, which can help in training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
31
Define l_i = l'_n ⊕ l'_i. Under this definition, the Hamming distance between l_i and l_j is the same as the Hamming distance between l'_i and l'_j. Therefore l_i has Hamming distance one from l_j if and only if v_i and v_j are adjacent. Thus, the cubical graph specified by bitstrings l_1, ..., l_n is also G. Note that the l_i bitstrings can be computed in polynomial time. We claim that l_1, ..., l_n is a valid instance of Promise Cubical Hamiltonian Path, i.e., this instance satisfies the promise of the problem. The first promise is that l_n = 00...0; by definition, l_n = l'_n ⊕ l'_n = 00...0. The second promise is that any Hamiltonian path in the cubical graph specified by l_1, l_2, ..., l_n has l_1 and l_n as its start and end. Note that the cubical graph specified by l_1, l_2, ..., l_n is the graph G, with vertex l_i in the cubical graph corresponding to vertex v_i in G. In other words, the promise requested is that any Hamiltonian path in G must start and end
1706.06708#31
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
31
One valuable feature of the Youtube-8M dataset is the large scale of annotated data (almost 10 million videos). More common annotated video datasets usually have sizes several orders of magnitude lower, ranging from 10k to 100k samples. With the large-scale dataset at hand, we evaluate the influence of the amount of training data on the performance of different models. To this end, we experimented with training different models (Gated NetVLAD, NetVLAD, LSTM and the average pooling based model) on multiple randomly sampled subsets of the Youtube 8M dataset. We experimented with subsets of 70K, 150K, 380K and 1150K samples. For each subset size, we trained models using three non-overlapping training subsets and measured the variance in performance. Figure 4 illustrates the GAP performance of each model when varying the training size. The error bars represent the variance observed when training the models on the three different training subsets. We observed low and consistent GAP variance for different models and training sizes. Although the LSTM model has fewer parameters (around 40M) compared to NetVLAD (around 160M) and Gated NetVLAD (around 180M), NetVLAD and
1706.06905#31
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
31
The procedure @nonoverlap(B, Tr, Conf(o), Hold) checks whether trajectory Tr collides with object o in configuration Conf(o) when the robot base is B. If Hold is None, this is checked by testing whether the pair $(Tr, T_B(\mathrm{Conf}(o)))$ is in the NT table, and if Hold is not None, by testing whether the pair is in the HT table. These are lookup operations in the two (hash) tables NT and HT, whose size is determined by the number of trajectories and the number of relative object configurations. This last number is independent of the number of objects but higher than the number of virtual configurations. In the worst case, it is bounded by the product of the number $N_B$ of robot bases and the number of real object configurations, which in turn is bounded by $N_B \times N_O$, where $N_O$ is the number of virtual object configurations. Usually, however, the number of entries in the overlap tables NT and HT is much smaller, as for most real object configurations $C$ and bases $B$, the point $T_B(C)$ does not fall into the "virtual table" that defines the local space of the robot when fixed at $B_0$. The size of the hash table $(Tr, C)$ precompiled for encoding the function
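A minimal sketch of this lookup, with illustrative data shapes: `NT` and `HT` as precomputed sets of (trajectory, relative-configuration) pairs and `T[B]` as the map into the local frame of base `B` (all names and shapes here are assumptions, not the paper's API):

```python
def nonoverlap(B, Tr, conf_o, hold, NT, HT, T):
    """True iff trajectory Tr is recorded as collision-free with an object
    at conf_o for base B; a constant-time hash lookup at planning time."""
    rel = T[B](conf_o)                  # object configuration in B's local frame
    table = NT if hold is None else HT  # HT additionally accounts for a held object
    return (Tr, rel) in table
```

The point of the precompilation is exactly this: all collision checking is paid offline, so at planning time each @nonoverlap call is a single hash lookup.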
1706.06927#31
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
31
MovieLens Dataset3. MovieLens data[11] contains 138,493 users, 27,278 movies, 21 categories and 20,000,263 samples. To make it suitable for the CTR prediction task, we transform it into binary classification data. The original user rating of the movies is a continuous value ranging from 0 to 5. We label the samples with ratings of 4 and 5 as positive and the rest as negative. We segment the data into training and testing datasets based on userID. Among all 138,493 users, 100,000 are randomly selected into the training set (about 14,470,000 samples) and the remaining 38,493 into the test set (about 5,530,000 samples). The task is to predict whether a user will rate a given movie above 3 (positive label) based on historical behaviors. Features include movie_id, movie_cate_id and the user's rated movie_id_list and movie_cate_id_list. We use the same optimizer, learning rate and mini-batch size as described for the Amazon Dataset. Alibaba Dataset. We collected traffic logs from the online display advertising system in Alibaba, of which two weeks' samples are used for training and samples of the following day for testing.
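A minimal sketch of this MovieLens preprocessing, assuming the standard MovieLens-20M ratings.csv column names (the file path and random seed are illustrative):

```python
import pandas as pd

ratings = pd.read_csv("ml-20m/ratings.csv")  # userId, movieId, rating, timestamp
ratings["label"] = (ratings["rating"] >= 4).astype(int)  # ratings 4 and 5 -> positive

# Split by user: 100,000 users for training, the remaining 38,493 for testing.
users = ratings["userId"].drop_duplicates().sample(frac=1.0, random_state=0)
train_users = set(users.iloc[:100_000])
train = ratings[ratings["userId"].isin(train_users)]
test = ratings[~ratings["userId"].isin(train_users)]
```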
1706.06978#31
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
32
in vertices $v_1 = s$ and $v_n = t$. This is guaranteed by the promise of the Promise Grid Graph Hamiltonian Path problem. Since $G$ is the graph specified by $l_1, l_2, \ldots, l_n$, the answer to the Promise Cubical Hamiltonian Path instance $l_1, l_2, \ldots, l_n$ is the same as the answer to the Promise Grid Graph Hamiltonian Path instance $(G, s, t)$. Thus, the procedure converting $(G, s, t)$ into $l_1, l_2, \ldots, l_n$, which runs in polynomial time, is a reduction proving that Promise Cubical Hamiltonian Path is NP-hard.

# (Group) Rubik's Square is NP-complete

# 4.1 Reductions

To prove that the Rubik's Square and Group Rubik's Square problems are NP-complete, we reduce from the Promise Cubical Hamiltonian Path problem of Section 3.2. Suppose we are given an instance of the Promise Cubical Hamiltonian Path problem consisting of $n$ bitstrings $l_1, \ldots, l_n$ of length $m$ (with $l_n = 00\ldots0$). To construct a Group Rubik's Square instance we need to compute the value $k$ indicating the allowed number of moves and construct the transformation $t \in RS_s$.
1706.06708#32
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
32
the LSTM model has fewer parameters (around 40M) compared to NetVLAD (around 160M) and Gated NetVLAD (around 180M), the NetVLAD and Gated NetVLAD models demonstrate better generalization than LSTM when trained from a lower number of samples. The Context Gating module still helps the basic NetVLAD-based architecture generalize better given a sufficient number of samples (at least 100k). We do not show results for smaller dataset sizes, as the results for all models dropped drastically. This is mainly due to the fact that the task is a multi-label prediction problem with a large pool of roughly 5000 labels. As these labels have a long-tail distribution, decreasing the dataset size to fewer than 30k samples would leave many labels without a single training example. Thus, it would not be clear whether the drop in performance is due to the aggregation technique or to a lack of training samples for rare classes.
1706.06905#32
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
32
that defines the local space of the robot when fixed at $B_0$. The size of the hash table $(Tr, C)$ precompiled for encoding the function vpose$(Tr)$ above is smaller and given just by the number of arm trajectories $Tr$, equal to the number of edges in the arm graph, which in turn is equal to $2 \times D \times k \times k'$, where $D$ is the number of virtual object configurations, $k$ is the number of grasping poses for each virtual object configuration, and $k'$ is the number of trajectories for reaching each grasping pose.
1706.06927#32
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
32
traffic logs from the online display advertising system in Alibaba, of which two weeks' samples are used for training and samples of the following day for testing. The sizes of the training and testing sets are about 2 billion and 0.14 billion respectively. For all the deep models, the dimensionality of the embedding vector is 12 for the whole 16 groups of features. The layers of the MLP are set to 192 × 200 × 80 × 2. Due to the huge size of the data, we set the mini-batch size to be 5000 and use Adam[15] as the optimizer. We
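A minimal PyTorch sketch of the stated configuration: 16 feature groups embedded at dimension 12 (16 × 12 = 192 MLP inputs), an MLP of sizes 192 × 200 × 80 × 2, and Adam. The ReLU activations are a placeholder assumption; the paper itself proposes a data-adaptive activation (Dice).

```python
import torch
import torch.nn as nn

mlp = nn.Sequential(
    nn.Linear(192, 200), nn.ReLU(),
    nn.Linear(200, 80), nn.ReLU(),
    nn.Linear(80, 2),              # two logits: click / no-click
)
optimizer = torch.optim.Adam(mlp.parameters())  # mini-batch size 5000 per the text
```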
1706.06978#32
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
33
The value $k$ can be computed directly as $k = 2n - 1$. The transformation $t$ will be an element of the group $RS_s$ where $s = 2(\max(m, n) + 2n)$. Define $a_i$ for $1 \le i \le n$ to be $(x_1)^{(l_i)_1} \circ (x_2)^{(l_i)_2} \circ \cdots \circ (x_m)^{(l_i)_m}$ where $(l_i)_1, (l_i)_2, \ldots, (l_i)_m$ are the bits of $l_i$. Also define $b_i = (a_i)^{-1} \circ y_i \circ a_i$ for $1 \le i \le n$. Then we define $t$ to be $a_1 \circ b_1 \circ b_2 \circ \cdots \circ b_n$. Outputting $(t, k)$ completes the reduction from the Promise Cubical Hamiltonian Path problem to the Group Rubik's Square problem. To reduce from the Promise Cubical Hamiltonian Path problem to the Rubik's Square problem we simply output $(C_t, k) = (t(C_0), k)$. These reductions clearly run in polynomial time.

# 4.2 Intuition
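Returning to the reduction just defined, a minimal sketch of its bookkeeping, representing transformations as words over the generators $x_j$ and $y_i$. Since these generators are self-inverse, a word is inverted by reversal; the left-to-right composition convention used here is an assumption.

```python
def a(l):
    """a_i as a word: one x_j generator for each set bit of bitstring l."""
    return [("x", j) for j, bit in enumerate(l, start=1) if bit == "1"]

def inverse(word):
    """x_j and y_i are self-inverse, so a word is inverted by reversing it."""
    return list(reversed(word))

def reduction(ls):
    """Given ls = [l_1, ..., l_n] with l_n = '00...0', return the word for
    t = a_1 . b_1 . ... . b_n and the move budget k = 2n - 1."""
    bs = [inverse(a(l)) + [("y", i)] + a(l) for i, l in enumerate(ls, start=1)]
    t = a(ls[0]) + [g for b in bs for g in b]
    return t, 2 * len(ls) - 1
```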
1706.06708#33
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
33
# 5.7 Ensembling

We explore the complementarity of different models and consider their combination through ensembling. Our ensemble consists of several independently trained models. The ensembling

Approach           | Ensemble size | GAP
Ours (Full)        | 25            | 85.0
Ours (Light)       | 7             | 84.7
Wang et al. [53]   | 75            | 84.6
Li et al. [54]     | 57            | 84.5
Chen et al. [55]   | 134           | 84.2
Skalic et al. [56] | 75            | 84.2

TABLE 4: Ensemble model sizes of the top ranked teams (out of 655) from the Youtube 8M kaggle competition.
1706.06905#33
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
33
# V. PLANNING ALGORITHM

The compilation of task and motion planning problems is efficient and results in planning problems that are compact. Yet standard planners like FF and LAMA do not handle functions and state constraints, while planners that do handle them compute heuristics that are not cost-effective in this setting [5]. For these reasons, we build instead on a different class of planning algorithms, called best-first width search (BFWS), which combines some of the benefits of goal-directed heuristic search with those of width-based search [19].
1706.06927#33
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
33
2 http://jmcauley.ucsd.edu/data/amazon/
3 https://grouplens.org/datasets/movielens/20m/

Table 2: Statistics of datasets used in this paper.

Dataset          | Users      | Goods^a     | Categories | Samples
Amazon(Electro). | 192,403    | 63,001      | 801        | 1,689,188
MovieLens.       | 138,493    | 27,278      | 21         | 20,000,263
Alibaba.         | 60 million | 0.6 billion | 100,000    | 2.14 billion

^a For the MovieLens dataset, goods refer to movies.

apply exponential decay, in which the learning rate starts at 0.001 and the decay rate is set to 0.9 (a minimal schedule sketch follows the baseline list below). The statistics of all the above datasets are shown in Table 2. The volume of the Alibaba dataset is much larger than that of both Amazon and MovieLens, which brings more challenges.

# 6.2 Competitors

• LR[19]. Logistic regression (LR) is a widely used shallow model for the CTR prediction task from before the era of deep networks. We implement it as a weak baseline.
• BaseModel. As introduced in Section 4.2, BaseModel follows the Embedding&MLP architecture and is the base of most subsequently developed deep networks for CTR modeling. It acts as a strong baseline for our model comparison.
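The exponential-decay schedule above is fully determined except for its step granularity; a minimal sketch, assuming a hypothetical `decay_steps` interval:

```python
def learning_rate(step, decay_steps):
    """Exponential decay as stated: initial rate 0.001, decay rate 0.9.
    The decay interval `decay_steps` is not given in the text and is assumed."""
    return 0.001 * (0.9 ** (step / decay_steps))
```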
1706.06978#33
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
34
# 4.2 Intuition

The key idea that makes this reduction work is that the transformations $b_i$ for $i \in \{1, \ldots, n\}$ all commute. This allows us to rewrite $t = a_1 \circ b_1 \circ b_2 \circ \cdots \circ b_n$ with the $b_i$s in a different order. If the order we choose happens to correspond to a Hamiltonian path in the cubical graph specified by $l_1, \ldots, l_n$, then when we explicitly write the $b_i$s and $a_1$ in terms of $x_j$s and $y_i$s, most of the terms cancel. In particular, the number of remaining terms will be exactly $k$. Since we can write $t$ as a combination of exactly $k$ $x_j$s and $y_i$s, we can invert $t$ using at most $k$ $x_j$s and $y_i$s. In other words, if there is a Hamiltonian path in the cubical graph specified by $l_1, \ldots, l_n$, then $(t, k)$ is a "yes" instance to the Group Rubik's Square problem.
1706.06708#34
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
34
TABLE 4: Ensemble model sizes of the top ranked teams (out of 655) from the Youtube 8M kaggle competition. averages label prediction scores of selected models. We have observed an increased effect of ensembling when combining diverse models. To choose models, we follow a simple greedy approach: we start with the best performing model and choose the next model by maximizing the GAP of the ensemble on the validation set. Our final ensemble used in the Youtube 8M challenge contains 25 models. A seven-model ensemble is enough to reach the first place with a GAP on the private test set of 84.688. These seven models are: Gated NetVLAD (256 clusters), Gated NetFV (128 clusters), Gated BoW (4096 clusters), BoW (8000 clusters), Gated NetRVLAD (256 clusters), GRU (2 layers, hidden size 1200) and LSTM (2 layers, hidden size 1024). Our code to reproduce this ensemble is available at: https://github.com/antoine77340/Youtube-8M-WILLOW. To obtain more diverse models for the final 25-model ensemble, we also added all the non-Gated models, varied the number of clusters, or varied the size of the pooled representation.
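A minimal sketch of this greedy forward selection, where `gap(models)` is an assumed callable scoring the averaged predictions of a model subset on the validation set:

```python
def greedy_ensemble(models, gap, k):
    """Start from the best single model, then repeatedly add the model
    that maximizes the validation GAP of the growing ensemble."""
    chosen = [max(models, key=lambda m: gap([m]))]
    while len(chosen) < k:
        rest = [m for m in models if m not in chosen]
        chosen.append(max(rest, key=lambda m: gap(chosen + [m])))
    return chosen
```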
1706.06905#34
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
34
Pure width-based search algorithms are exploration algorithms and do not rely on goal-directed heuristics. The simplest such algorithm is IW(1), which is a plain breadth-first search where newly generated states that do not make an atom X = x true for the first time in the search are pruned. The algorithm IW(2) is similar except that a state s is pruned when there are no atoms X = x and Y = y such that the pair of atoms (X = x, Y = y) is true in s and false in all the states generated before s. More generally, the algorithm IW(k) is a normal breadth-first search except that newly generated states s are pruned when their "novelty" is greater than k, where the novelty of s is i iff there is a tuple t of i atoms such that s is the first state in the search that makes all the atoms in t true, with no tuple of smaller size having this property [19]. While simple, it has been shown that the procedure IW(k) manages to solve arbitrary instances of many of the standard benchmark domains in low polynomial time provided
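As a concrete illustration of the IW(1) pruning rule just described, here is a minimal sketch; the `atoms`, `succ`, and `is_goal` callables are assumptions standing in for a problem or simulator interface:

```python
from collections import deque

def iw1(init, succ, atoms, is_goal):
    """Breadth-first search that prunes any state failing to make
    some atom X=x true for the first time in the search."""
    seen_atoms = set(atoms(init))
    queue = deque([init])
    while queue:
        s = queue.popleft()
        if is_goal(s):
            return s
        for s2 in succ(s):
            new = atoms(s2) - seen_atoms
            if new:                    # novelty 1: some atom first seen here
                seen_atoms |= new
                queue.append(s2)
    return None                        # exhausted the (pruned) search space
```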
1706.06927#34
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
34
• Wide&Deep[4]. In real industrial applications, Wide&Deep model has been widely accepted. It consists of two parts: i) wide model, which handles the manually designed cross product features, ii) deep model, which automatically ex- tracts nonlinear relations among features and equals to the BaseModel. Wide&Deep needs expertise feature engineering on the input of the "wide" module. We follow the practice in [10] to take cross-product of user behaviors and candidates as wide inputs. For example, in MovieLens dataset, it refers to the cross-product of user rated movies and candidate movies. • PNN[5]. PNN can be viewed as an improved version of BaseModel by introducing a product layer after embedding layer to capture high-order feature interactions. • DeepFM[10]. It imposes a factorization machines as "wide" module in Wide&Deep saving feature engineering jobs.
1706.06978#34
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
35
In order to more precisely describe the cancellation of terms in $t$, we can consider just one local part: $b_i \circ b_{i'}$. We can rewrite this as $(a_i)^{-1} \circ y_i \circ a_i \circ (a_{i'})^{-1} \circ y_{i'} \circ a_{i'}$. The interesting part is that $a_i \circ (a_{i'})^{-1}$ will cancel to become just one $x_j$. Note that

$a_i \circ (a_{i'})^{-1} = (x_1)^{(l_i)_1} \circ (x_2)^{(l_i)_2} \circ \cdots \circ (x_m)^{(l_i)_m} \circ (x_1)^{-(l_{i'})_1} \circ (x_2)^{-(l_{i'})_2} \circ \cdots \circ (x_m)^{-(l_{i'})_m}$,

which we can rearrange as

$(x_1)^{(l_i)_1 - (l_{i'})_1} \circ (x_2)^{(l_i)_2 - (l_{i'})_2} \circ \cdots \circ (x_m)^{(l_i)_m - (l_{i'})_m}$.

Next, if $b_i$ and $b_{i'}$ correspond to adjacent vertices $l_i$ and $l_{i'}$, then $(l_i)_j - (l_{i'})_j$ is zero for all $j$ except one for which $(l_i)_j - (l_{i'})_j = \pm 1$. Thus the above can be rewritten as $(x_j)^1$ or $(x_j)^{-1}$ for some specific $j$. Since $x_j = (x_j)^{-1}$, this shows that $a_i \circ (a_{i'})^{-1}$ simplifies to $x_j$ for some $j$. This intuition is formalized in a proof in the following subsection.

# 4.3 Promise Cubical Hamiltonian Path solution → (Group) Rubik's Square solution

Lemma 4.1. The transformations $b_i$ all commute.
1706.06708#35
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
35
Table 4 shows the ensemble sizes of the other top ranked approaches, out of 655 teams, from the Youtube-8M kaggle challenge. Besides showing the best performance at the competition, we also designed a smaller set of models that ensembles more efficiently than the others. Indeed, we need far fewer models in our ensemble than the other top performing approaches. The full ranking can be found at: https://www.kaggle.com/c/youtube8m/leaderboard.

6 CONCLUSIONS

We have addressed the problem of large-scale video tagging and explored trainable variants of classical pooling methods (BoW, VLAD, FV) for the temporal aggregation of audio and visual features. In this context we have observed NetVLAD, NetFV and BoW to outperform more common temporal models such as LSTM and GRU. We have also introduced the Context Gating mechanism and have shown its benefit for the trainable versions of BoW, VLAD and FV. The ensemble of our individual models has been shown to improve the performance further, enabling our method to win the Youtube 8M Large-Scale Video Understanding challenge. Our TensorFlow toolbox LOUPE is available for download from [57] and includes implementations of the Context Gating as well as the learnable pooling modules used in this work.
1706.06905#35
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
35
that the goal is a single atom. Such domains can indeed be shown to have a small and bounded width $w$ that does not depend on the instance size, which implies that they can be solved (optimally) by running IW($w$). Moreover, IW(k) runs in time and space that are exponential in $k$ and not in the number of problem variables. IW calls the procedures IW(1), IW(2), . . . sequentially until finding a solution. IW is complete but not effective in problems with multiple goal atoms. For this, Serialized IW (SIW) calls IW sequentially to achieve the goal atoms one at a time. While SIW is a blind search procedure that is incomplete (it can get trapped into dead-ends), it turns out to perform much better than a greedy best-first search guided by the standard heuristics. Other variations of IW have been used for planning in the Atari games and those of the General Video-Game AI competition [22, 10, 29].
1706.06927#35
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
35
• DeepFM[10]. It uses factorization machines as the "wide" module in Wide&Deep, saving the feature engineering effort.

6.3 Metrics

In the CTR prediction field, AUC is a widely used metric[8]. It measures the goodness of order by ranking all the ads with predicted CTR, including intra-user and inter-user orders. A variation of user-weighted AUC is introduced in [7, 13] which measures the goodness of intra-user order by averaging AUC over users, and is shown to be more relevant to online performance in display advertising systems. We adapt this metric in our experiments. For simplicity, we still refer to it as AUC. It is calculated as follows:

$$\mathrm{AUC} = \frac{\sum_{i=1}^{n} \#\mathrm{impression}_i \times \mathrm{AUC}_i}{\sum_{i=1}^{n} \#\mathrm{impression}_i}, \quad (10)$$

where $n$ is the number of users, and $\#\mathrm{impression}_i$ and $\mathrm{AUC}_i$ are the number of impressions and the AUC corresponding to the $i$-th user. Besides, we follow [25] to introduce the RelaImpr metric to measure relative improvement over models. For a random guesser, the value of AUC is 0.5. Hence RelaImpr is defined as below:

$$\mathrm{RelaImpr} = \left( \frac{\mathrm{AUC}(\text{measured model}) - 0.5}{\mathrm{AUC}(\text{base model}) - 0.5} - 1 \right) \times 100\%. \quad (11)$$
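A minimal sketch of Eqs. (10) and (11), assuming per-user (labels, scores) groups; skipping users whose labels are all identical (where per-user AUC is undefined) is an assumption of this sketch:

```python
from sklearn.metrics import roc_auc_score

def weighted_auc(groups):
    """Impression-weighted AUC of Eq. (10); `groups` maps each user
    to a (labels, scores) pair over that user's impressions."""
    num = den = 0.0
    for labels, scores in groups.values():
        if len(set(labels)) < 2:
            continue                       # per-user AUC undefined here
        n = len(labels)                    # this user's impression count
        num += n * roc_auc_score(labels, scores)
        den += n
    return num / den

def rela_impr(auc_measured, auc_base):
    """RelaImpr of Eq. (11), in percent."""
    return ((auc_measured - 0.5) / (auc_base - 0.5) - 1) * 100.0
```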
1706.06978#35
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
36
# 4.3 Promise Cubical Hamiltonian Path solution → (Group) Rubik's Square solution

Lemma 4.1. The transformations $b_i$ all commute.

Proof: Consider any such transformation $b_i$. The transformation $b_i$ can be rewritten as $(a_i)^{-1} \circ y_i \circ a_i$. For any cubie not moved by the $y_i$ middle term, the effect of this transformation is the same as the effect of transformation $(a_i)^{-1} \circ a_i = 1$. In other words, $b_i$ only affects cubies that are moved by the $y_i$ term. But $y_i$ only affects cubies with $y$ coordinate $i$. In general in a Rubik's Square, cubies with $y$ coordinate $i$ at some particular time will have $y$ coordinate $\pm i$ at all times. Thus, all the cubies affected by $b_i$ start in rows $\pm i$. This is enough to see that the cubies affected by $b_i$ are disjoint from those affected by $b_j$ (for $j \neq i$). In other words, the transformations $b_i$ all commute.

Theorem 4.2. If $l_1, \ldots, l_n$ is a "yes" instance to the Promise Cubical Hamiltonian Path problem, then $(t, k)$ is a "yes" instance to the Group Rubik's Square problem.
1706.06708#36
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
36
ACKNOWLEDGMENTS The authors would like to thank Jean-Baptiste Alayrac and Relja Arandjelović for valuable discussions as well as the Google team for providing the Youtube-8M Tensorflow Starter Code. This work has also been partly supported by ERC grants ACTIVIA (no. 307574) and LEAP (no. 336845), the CIFAR Learning in Machines & Brains program, the ESIF, OP Research, development and education Project IMPACT No. CZ.02.1.01/0.0/0.0/15 003/0000468, and a Google Research Award.
1706.06905#36
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
36
Width-based algorithms such as IW and SIW do not require PDDL-like planning models and can work directly with simulators, and thus, unlike heuristic search planning algorithms, can be easily adapted to work with Functional STRIPS with state constraints. The problem is that by themselves, IW and SIW are not powerful enough for solving large CTMP problems. For such problems it is necessary to complement the effective exploration that comes from width-based search with the guidance that results from goal-directed heuristics. For this reason, we appeal to a combination of heuristic and width-based search called Best-First Width Search (BFWS), which has recently been shown to yield state-of-the-art results over the classical planning benchmarks [20]. BFWS is a standard best-first search with a sequence of evaluation functions $f = (h, h_1, \ldots, h_n)$ where the node selected for expansion from the OPEN list at each iteration is the node that minimizes $h$, using the other $h_i$ functions lexicographically for breaking ties. In the best performing variants of BFWS, the main function $h = w$ computes the "novelty" of the nodes, while the other functions $h_i$ take the goal into account.
1706.06927#36
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
36
$$\mathrm{RelaImpr} = \left( \frac{\mathrm{AUC}(\text{measured model}) - 0.5}{\mathrm{AUC}(\text{base model}) - 0.5} - 1 \right) \times 100\%. \quad (11)$$

[Figure 4: train loss, test loss and test AUC over training epochs for BaseModel variants: No Goods id; Goods id without Reg; Goods id Dropout; Goods id Filter; Goods id with Reg in DiFacto; Goods id with MBA Reg.]

Figure 4: Performances of BaseModel with different regularizations on Alibaba Dataset. Training with fine-grained goods_ids features without regularization encounters serious overfitting after the first epoch. All the regularizations show improvement, among which our proposed mini-batch aware regularization performs best. Besides, a well-trained model with goods_ids features gets a higher AUC than one without them. This comes from the richer information that fine-grained features contain.
1706.06978#36
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
37
Proof: Suppose $l_1, \ldots, l_n$ is a "yes" instance to the Promise Cubical Hamiltonian Path problem. Let $m$ be the length of the $l_i$ and note that $l_n = 00\ldots0$ by the promise of the Promise Cubical Hamiltonian Path problem. Furthermore, since $l_1, \ldots, l_n$ is a "yes" instance to the Promise Cubical Hamiltonian Path problem, there exists an ordering of these bitstrings $l_{i_1}, l_{i_2}, \ldots, l_{i_n}$ such that each consecutive pair of bitstrings is at Hamming distance one, $i_1 = 1$, and $i_n = n$ (with the final two conditions coming from the promise). By Lemma 4.1, we know that $t = a_1 \circ b_1 \circ b_2 \circ \cdots \circ b_n$ can be rewritten as $t = a_1 \circ b_{i_1} \circ b_{i_2} \circ \cdots \circ b_{i_n}$. Using the definition of $b_i$, we can further rewrite this as

$t = a_1 \circ ((a_{i_1})^{-1} \circ y_{i_1} \circ a_{i_1}) \circ ((a_{i_2})^{-1} \circ y_{i_2} \circ a_{i_2}) \circ \cdots \circ ((a_{i_n})^{-1} \circ y_{i_n} \circ a_{i_n})$,

or as
1706.06708#37
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
37
[Figure: qualitative examples comparing groundtruth labels with the top predicted scores.]

Groundtruth: Dish, Cuisine, Food, Sauce. Top 4 scores: Food (99.7%), Cooking (78.4%), Cuisine (69.9%), Dish (44.6%).
Groundtruth: Barbecue, Grilling, Machine, Food, Wood, Cooking. Top 6 scores: Food (97.5%), Wood (74.9%), Barbecue (60.0%), Cooking (50.1%), Barbecue grill (27.9%), Table (27.4%).
Groundtruth: Gadget, iPhone 3G, Mobile Phone, Smartphone, Computer Monitor, Telephone. Top 6 scores: Mobile phone (99.9%), Smartphone (99.7%), Gadget (89.3%), Telephone (49.0%), Camera (5.2%), Microsoft Lumia (3.3%).
Groundtruth: Festival, Musician, Parade, Marching Band, University, Musical Ensemble, American Football, Stadium. Top 9 scores: Parade (99.7%), Musical Ensemble (99.7%), Marching Band (98.9%), Musician (65.9%), Festival (59.7%), University (truncated).
1706.06905#37
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
37
For our compiled CTMP domain, we use BFWS with an evaluation function f = (w, h1, . . . , hn), where w stands for a standard novelty measure, and h1, . . . , hn are simple heuristic counters defined for this particular domain. The novelty w is defined as in [20]; namely, the novelty w(s) of a newly generated state s in the BFWS guided by the function f = (w, h1, . . . , hn) is i iff there is a tuple (conjunction) of i atoms X, and no tuple of smaller size, that is true in s but false in all the states s' generated before s with the same function values h1(s') = h1(s), . . . , and hn(s') = hn(s). According to this definition, for example, a new state s has novelty 1 if there is an atom X that is true in s but false in all the states s' generated before s where hi(s') = hi(s) for all i.
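The novelty-1 case of this definition admits a compact sketch. The following is a minimal Python sketch, with hypothetical state and atom representations (not the planner's actual data structures): one novelty table per tuple of heuristic values, and a state is novel iff it makes some atom true for the first time within its partition.

```python
# Minimal sketch of the novelty-1 test used in BFWS(f), f = (w, h1, ..., hn).
# A new state has novelty 1 iff it makes some atom true that no previously
# generated state with the same (h1, ..., hn) values made true.

from collections import defaultdict

class NoveltyTable:
    def __init__(self):
        # seen[h_values] = atoms already seen in that heuristic partition
        self.seen = defaultdict(set)

    def novelty_is_1(self, state_atoms, h_values):
        """state_atoms: set of ground atoms true in the new state.
        h_values: tuple (h1(s), ..., hn(s)) partitioning the tables."""
        new_atoms = state_atoms - self.seen[h_values]
        self.seen[h_values] |= state_atoms
        return len(new_atoms) > 0
```

States of novelty 1 are preferred in the search; testing novelty i for larger i generalizes the table from single atoms to tuples of atoms.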
1706.06927#37
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
37
Table 3: Model Comparison on Amazon Dataset and MovieLens Dataset. All the lines calculate RelaImpr by comparing with BaseModel on each dataset respectively. Table 4: Best AUCs of BaseModel with different regularizations on Alibaba Dataset corresponding to Fig.4. All the other lines calculate RelaImpr by comparing with first line.

Model           | MovieLens AUC | RelaImpr | Amazon(Electro) AUC | RelaImpr
LR              | 0.7263        | -1.61%   | 0.7742              | -24.34%
BaseModel       | 0.7300        | 0.00%    | 0.8624              | 0.00%
Wide&Deep       | 0.7304        | 0.17%    | 0.8637              | 0.36%
PNN             | 0.7321        | 0.91%    | 0.8679              | 1.52%
DeepFM          | 0.7324        | 1.04%    | 0.8683              | 1.63%
DIN             | 0.7337        | 1.61%    | 0.8818              | 5.35%
DIN with Dice^a | 0.7348        | 2.09%    | 0.8871              | 6.82%
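The RelaImpr columns are consistent with measuring relative improvement over BaseModel after subtracting the 0.5 AUC of a random guesser. A quick Python check (the formula is inferred from the table, whose rows it reproduces exactly):

```python
# RelaImpr as used in the table: relative improvement over BaseModel after
# removing the 0.5 AUC of random guessing.

def rela_impr(auc_model, auc_base):
    return ((auc_model - 0.5) / (auc_base - 0.5) - 1.0) * 100.0

# Reproducing two MovieLens rows of Table 3 (BaseModel AUC = 0.7300):
print(f"{rela_impr(0.7263, 0.7300):+.2f}%")  # LR            -> -1.61%
print(f"{rela_impr(0.7348, 0.7300):+.2f}%")  # DIN with Dice -> +2.09%
```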
1706.06978#37
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
38
or as t = (a1 ◦ (ai1)−1) ◦ yi1 ◦ (ai1 ◦ (ai2)−1) ◦ yi2 ◦ (ai2 ◦ (ai3)−1) ◦ · · · ◦ (ain−1 ◦ (ain)−1) ◦ yin ◦ (ain). We know that i1 = 1, and therefore that a1 ◦ (ai1)−1 = a1 ◦ (a1)−1 = 1 is the identity element. Similarly, we know that in = n and therefore that ain = an = (x1)(ln)1 ◦ (x2)(ln)2 ◦ · · · ◦ (xm)(ln)m = (x1)0 ◦ (x2)0 ◦ · · · ◦ (xm)0 = 1 is also the identity. Thus we see that t = yi1 ◦ (ai1 ◦ (ai2)−1) ◦ yi2 ◦ (ai2 ◦ (ai3)−1) ◦ · · · ◦ (ain−1 ◦ (ain)−1) ◦ yin. Consider the transformation aip ◦ (aip+1)−1. This transformation can be written as
1706.06708#38
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
38
(continuation of Fig. 5 examples) Camera (5.2%) - Microsoft Lumia (3.3%); Marching Band (98.9%) - Musician (65.9%) - Festival (59.7%) - University (9.1%) - School (9.1%) - Military Band (8.8%) - Stadium (8.7%)
Groundtruth: Paper - Food - Art. Top 4 scores: Paper (61.6%) - Art (51.9%) - Hat (13.5%) - Paint (10.2%)
Groundtruth: Concert - Musician - Musical Ensemble - Drummer - Orchestra. Top 5 scores: Concert (99.6%) - Musician (99.6%) - Musical Ensemble (92.8%) - Orchestra (89.0%) - Drummer (80.0%)
Groundtruth: Radio-controlled aircraft - Vehicle - North America P-51 Mustang - Airplane - Model Aircraft - Landing - Radio-controlled model. Top 7 scores: Vehicle (100%) - Airplane (99.9%) - Radio-controlled model (99.6%) - Model aircraft (98.8%) - Aircraft
Groundtruth: Skateboard - Skateboarding - Outdoor recreation. Top 3 scores: Skateboarding (100%) - Skateboard (98.2%) - Outdoor recreation (97.0%)
1706.06905#38
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
38
For the tie-breaking functions hi we consider three counters. The first is the standard goal counter #g, where #g(s) stands for the number of goal atoms that are not true in s. The second is a slightly richer goal counter hM that takes into account that each object that has to be moved to a goal destination requires at least two actions: one for picking up the object, and one for placing the object. Thus hM (s) stands for twice the number of objects that are not in their goal configurations in s, minus 1 in case one such object is being held.
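A minimal Python sketch of these two counters, assuming a hypothetical state object with a conf mapping and a holding field (not the planner's actual representation):

```python
# Sketch of the goal counters: #g counts unsatisfied goal atoms; hM charges
# two actions (pick + place) per misplaced object, minus one when the robot
# already holds such an object.

def count_g(state, goal_confs):
    """goal_confs: dict mapping each object to its required configuration."""
    return sum(1 for o, c in goal_confs.items() if state.conf[o] != c)

def h_m(state, goal_confs):
    misplaced = [o for o, c in goal_confs.items() if state.conf[o] != c]
    h = 2 * len(misplaced)
    if state.holding in misplaced:
        h -= 1  # the pick-up action for the held object is already done
    return h
```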
1706.06927#38
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
38
Regularization                            | AUC    | RelaImpr
Without goods_ids feature and Reg.        | 0.5940 | 0.00%
With goods_ids feature without Reg.       | 0.5959 | 2.02%
With goods_ids feature and Dropout Reg.   | 0.5970 | 3.19%
With goods_ids feature and Filter Reg.    | 0.5983 | 4.57%
With goods_ids feature and Difacto Reg.   | 0.5954 | 1.49%
With goods_ids feature and MBA Reg.       | 0.6031 | 9.68%

^a Other lines except LR use PReLU as activation function.

# 6.4 Result from model comparison on Amazon Dataset and MovieLens Dataset
1706.06978#38
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
39
Consider the transformation aip ◦ (aip+1)−1. This transformation can be written as aip ◦ (aip+1)−1 = (x1)(lip )1 ◦ (x2)(lip )2 ◦ · · · ◦ (xm)(lip )m ◦ (x1)−(lip+1 )1 ◦ (x2)−(lip+1 )2 ◦ · · · ◦ (xm)−(lip+1 )m. Because xu always commutes with xv, we can rewrite this as aip ◦ (aip+1)−1 = (x1)(lip )1−(lip+1 )1 ◦ (x2)(lip )2−(lip+1 )2 ◦ · · · ◦ (xm)(lip )m−(lip+1 )m. Since lip differs from lip+1 in only one position, call it jp, we see that (lip)j − (lip+1)j is zero unless j = jp, and is ±1 in that final case. This is sufficient to show that aip ◦ (aip+1)−1 = (xjp)±1 = xjp. Thus we see that
1706.06708#39
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
39
(continuation of Fig. 5 examples) Vehicle (100%) - Airplane (99.9%) - Radio-controlled model (99.6%) - Model aircraft (98.8%) - Aircraft (98.7%) - Radio-controlled aircraft (94.0%) - Motorsport (55.3%)
Groundtruth: Tree - Christmas Tree - Christmas Decoration - Christmas. Top 6 scores: Christmas (87.7%) - Christmas decoration (40.1%) - Origami (23.0%) - Paper (15.2%) - Tree (13.9%) - Christmas Tree (7.4%)
Groundtruth: Car - Vehicle - Sport Utility Vehicle - Dacia Duster - Renault - Four Wheel Drive. Top 6 scores: Car (100%) - Vehicle (100%) - Sport Utility Vehicle (97.0%) - Dacia Duster (30.1%) - Fiat Automobiles (30.0%) - Volkswagen Beetles (12.0%)
1706.06905#39
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
39
The last tie-breaker used corresponds to the counter #c(s) that tracks the number of objects that are in “obstructing configurations” in the state s. This measure is determined from a set C of object configurations computed once from the initial problem state, as is common in landmark heuristics. The count #c(s) is i if there are i objects o for which the state variable Conf(o) has a value in s that is in C. The intuition is that a configuration is “obstructing” if it is in the way of an arm trajectory that follows a suitable relaxed plan for achieving a goal atom. More precisely, we use a single IW(2) call at preprocessing for computing a plan for each goal atom in a problem relaxation that ignores state constraints (i.e., collisions). These relaxed problems are “easy” as they just involve robot motions toward the goal object followed by a pick up action, more robot motions, and a place action. The search tree constructed by IW(2) normally includes a plan for each goal atom in this relaxation, and often more than one plan. One such
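The selection of the least-colliding relaxed plan admits a short sketch; the plan objects and the collision extractor below are hypothetical stand-ins for the IW(2) machinery described here:

```python
# Sketch: for one atomic goal, pick the relaxed plan violating the fewest
# @nonoverlap constraints and return the object configurations it collides
# with (these become "obstructing configurations" for that goal).

def obstructing_confs_for_goal(relaxed_plans, collisions_of):
    """relaxed_plans: candidate plans from the IW(2) preprocessing pass;
    collisions_of(plan): set of object configurations C such that some
    MoveArm step of the plan violates @nonoverlap(Base, Arm, C, Hold)."""
    best = min(relaxed_plans, key=lambda p: len(collisions_of(p)))
    return collisions_of(best)
```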
1706.06927#39
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
39
# a Other lines except LR use PReLU as activation function. # 6.4 Result from model comparison on Amazon Dataset and MovieLens Dataset Table 3 shows the results on the Amazon dataset and the MovieLens dataset. All experiments are repeated 5 times and averaged results are reported. The influence of random initialization on AUC is less than 0.0002. Obviously, all the deep networks beat the LR model significantly, which indeed demonstrates the power of deep learning. PNN and DeepFM with specially designed structures perform better than Wide&Deep. DIN performs best among all the competitors. Especially on the Amazon Dataset with rich user behaviors, DIN stands out significantly. We owe this to the design of the local activation unit structure in DIN. DIN pays attention to the locally related user interests by soft-searching for parts of user behaviors that are relevant to the candidate ad. With this mechanism, DIN obtains an adaptively varying representation of user interests, greatly improving the expressive ability of the model compared with other deep networks. Besides, DIN with Dice brings further improvement over DIN, which verifies the effectiveness of the proposed data adaptive activation function Dice.
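An illustrative NumPy sketch of the weighted sum pooling performed by such a local activation unit. The single-layer scorer and the elementwise product standing in for the paper's interaction term are simplifications, and all shapes and parameters are hypothetical; the paper uses a deeper feed-forward scorer and, notably, no softmax normalization of the weights:

```python
# Weighted sum pooling over user behaviors, with weights produced by a small
# scorer over [behavior, interaction, candidate]; simplified sketch of the
# local activation unit idea, not the paper's exact architecture.

import numpy as np

def local_activation_pool(behaviors, candidate, W, b, v):
    """behaviors: (T, d) behavior embeddings; candidate: (d,) ad embedding.
    W, b, v: parameters of a one-layer ReLU scorer (hypothetical sizes)."""
    pooled = np.zeros_like(candidate)
    for e in behaviors:
        feats = np.concatenate([e, e * candidate, candidate])  # (3d,)
        w = float(v @ np.maximum(W @ feats + b, 0.0))          # scalar weight
        pooled += w * e                                        # no softmax
    return pooled

rng = np.random.default_rng(0)
d, T = 8, 5
out = local_activation_pool(rng.normal(size=(T, d)), rng.normal(size=d),
                            rng.normal(size=(16, 3 * d)), np.zeros(16),
                            rng.normal(size=16))
print(out.shape)  # (8,)
```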
1706.06978#39
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
40
Thus we see that t = yi1 ◦ xj1 ◦ yi2 ◦ xj2 ◦ · · · ◦ xjn−1 ◦ yin, or (by left multiplying) that 1 = (yin)−1 ◦ (xjn−1)−1 ◦ · · · ◦ (xj2)−1 ◦ (yi2)−1 ◦ (xj1)−1 ◦ (yi1)−1 ◦ t = yin ◦ xjn−1 ◦ · · · ◦ xj2 ◦ yi2 ◦ xj1 ◦ yi1 ◦ t. We see that t can be reversed by k = 2n − 1 moves of the form xj or yi, or in other words that (t, k) is a “yes” instance to the Group Rubik’s Square problem. Corollary 4.3. If l1, . . . , ln is a “yes” instance to the Promise Cubical Hamiltonian Path problem, then (Ct, k) is a “yes” instance to the Rubik’s Square problem. Proof: This follows immediately from Theorem 4.2 and Lemma 2.1. # 4.4 Coloring of Ct
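The count k = 2n − 1 just tallies the moves of the inverting word; a short LaTeX restatement of the step above:

```latex
% Requires amsmath. The inverting word alternates n moves y_{i_p}
% with n-1 moves x_{j_p}, hence k = 2n - 1:
\[
  \underbrace{y_{i_n}\circ x_{j_{n-1}}\circ y_{i_{n-1}}\circ\cdots\circ
              x_{j_1}\circ y_{i_1}}_{n+(n-1)\,=\,2n-1\ \text{moves}}
  \circ\, t \;=\; 1 .
\]
```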
1706.06708#40
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
40
Fig. 5: Qualitative results from our best single model (Gated NetVLAD). We show both groundtruth labels (in green) from the Youtube 8M dataset and the top predictions of the Gated NetVLAD model. # REFERENCES [1] K. He, X. Zhang, S. Ren, and J. Sun, “Deep Residual Learning for Image Recognition,” in CVPR, 2016. [2] A. Krizhevsky, I. Sutskever, and G. E. Hinton, “Imagenet classification with deep convolutional neural networks,” in NIPS, 2012. [3] K. Simonyan and A. Zisserman, “Very deep convolutional networks for large-scale image recognition,” in ICLR, 2015. [4] C. Szegedy, S. Ioffe, and V. Vanhoucke, “Inception-v4, inception-resnet and the impact of residual connections on learning,” arXiv:1602.07261v1, 2016.
1706.06905#40
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
40
and a place action. The search tree constructed by IW(2) normally includes a plan for each goal atom in this relaxation, and often more than one plan. One such relaxed plan “collides” with an object o if a MoveArm(t) action in the plan leads to a state where a state constraint @nonoverlap(Base,Arm,Conf(o),Hold) is violated (this is possible because of the relaxation). In the presence of multiple plans for an atomic goal in the relaxation, a plan is selected that collides with a minimum number of objects. For such an atomic goal, the “obstructing configurations” are the real object configurations C such that a state constraint @nonoverlap(Base,Arm,C,Hold) is violated in some state of the relaxed plan where Conf(o) = C for some object o. We further consider as obstructing those configurations that in a similar manner obstruct the achievement of the goal of holding any object o that is in an obstructing configuration in the initial state, recursively and up to a fixpoint. The set C is then the union of the sets of “obstructing
1706.06927#40
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
40
6.5 Performance of regularization As the dimension of features in both the Amazon Dataset and the MovieLens Dataset is not high (about 0.1 million), all the deep models including our proposed DIN do not suffer from serious overfitting. However, when it comes to the Alibaba dataset from the online advertising system, which contains higher dimensional sparse features, overfitting turns out to be a big challenge. For example, when training deep models with fine-grained features (e.g., features of goods_ids with dimension of 0.6 billion in Table 1), serious overfitting occurs after the first epoch without any regularization, which causes the model performance to drop rapidly, as shown by the dark green line in Fig.4. For this reason, we conduct careful experiments to check the performance of several commonly used regularizations. • Dropout[22]. Randomly discard 50% of feature ids in each sample. • Filter. Filter visited goods_id by occurrence frequency in samples and leave only the most frequent ones. In our setting, the top 20 million goods_ids are left. • Regularization in DiFacto[16]. Parameters associated with frequent features are less over-regularized.
1706.06978#40
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
41
Proof: This follows immediately from Theorem 4.2 and Lemma 2.1. # 4.4 Coloring of Ct In order to show the other direction of the proof, it will be helpful to consider the coloring of the stickers on the top and bottom faces of the Rubik’s Square. In particular, if we define b = b1 ◦· · ·◦bn (so that t = a1 ◦ b), then it will be very helpful for us to know the colors of the top and bottom stickers in configuration Cb = b(C0). Consider for example the instance of Promise Cubical Hamiltonian Path with n = 5 and m = 3 defined below: l1 = 011 l2 = 110 l3 = 111 l4 = 100 l5 = 000 For this example, C0 is an s × s Rubik’s Square with s = 2(max(m, n) + 2n) = 30.
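A brute-force Python check, feasible at this size, confirming that this example is a “yes” instance (one valid ordering is 011, 111, 110, 100, 000):

```python
# Brute-force check that the n = 5, m = 3 example admits an ordering with
# consecutive Hamming distance 1 that starts at l1 and ends at ln = 000.

from itertools import permutations

ls = ["011", "110", "111", "100", "000"]

def hamming(a, b):
    return sum(x != y for x, y in zip(a, b))

def cubical_ham_path(ls):
    first, last, middle = ls[0], ls[-1], tuple(ls[1:-1])
    for perm in permutations(middle):
        order = (first,) + perm + (last,)
        if all(hamming(a, b) == 1 for a, b in zip(order, order[1:])):
            return order
    return None

print(cubical_ham_path(ls))  # ('011', '111', '110', '100', '000')
```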
1706.06708#41
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
41
[5] D. Tran, L. Bourdev, R. Fergus, L. Torresani, and M. Paluri, “Learning spatiotemporal features with 3d convolutional networks,” in ICCV, 2015. [6] C. Feichtenhofer, A. Pinz, and A. Zisserman, “Convolutional two-stream network fusion for video action recognition,” in CVPR, 2016. [7] I. Laptev, M. Marszalek, C. Schmid, and B. Rozenfeld, “Learning realistic human actions from movies,” in CVPR, 2008. [8] C. Sch¨uldt, I. Laptev, and B. Caputo, “Recognizing human actions: a local svm approach,” in ICPR, 2004. [9] H. Wang and C. Schmid, “Action Recognition with Improved Trajectories,” in ICCV, 2013.
1706.06905#41
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
41
configuration in the initial state, recursively and up to a fixpoint. The set C is then the union of the sets of “obstructing configurations” for each atomic goal, and #c(s) is the number of objects o for which the value C of the state variable Conf (o) in s belongs to C. Note that unlike the other two heuristics #g and hM , which must have value zero in the goal, the #c(s) counter may be different than zero in the goal. Indeed, if a problem involves exchanging the configuration of two objects, #c(s) will be equal to 2 in the goal, as the two goal configurations are actually obstructing configurations as determined from the initial state. The set C of obstructing configurations is computed once from the initial state in low polynomial time by calling the IW(2) procedure once. The resulting #c(s) count provides an heuristic estimate of the number of objects that need to be removed in order to achieve the goal, a version of the minimum constraint removal problem [12] mentioned in [7].
1706.06927#41
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
41
• Regularization in DiFacto[16]. Parameters associated with frequent features are less over-regularized. • MBA. Our proposed Mini-Batch Aware regularization method (Eq.4). The regularization parameter λ for both DiFacto and MBA is searched and set to 0.01. Fig.4 and Table 4 give the comparison results. Focusing on the detail of Fig.4, the model trained with fine-grained goods_ids features brings a large improvement in test AUC performance in the first epoch, compared with training without them. However, overfitting occurs rapidly in the case of training without regularization (dark green line). Dropout prevents quick overfitting but causes slower convergence. The frequency filter relieves overfitting to a degree. Regularization in DiFacto sets a greater penalty on goods_ids with high frequency, and performs worse than the frequency filter. Our proposed mini-batch aware (MBA) regularization performs best compared with all the other methods, preventing overfitting significantly. Besides, well-trained models with goods_ids features show better AUC performance than those without them. This is due to the richer information that the fine-grained features contain. Considering this, although the frequency filter performs slightly better than dropout, it throws away most low-frequency ids and may leave less room for models to make better use of fine-grained features.
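A rough NumPy sketch consistent with this description of MBA (Eq.4 itself is not reproduced in this excerpt): only embedding rows of feature ids present in the mini-batch are penalized, each scaled by its inverse global occurrence count, so frequent ids are penalized less per occurrence. All names are hypothetical.

```python
# Mini-batch aware L2 sketch: regularization gradient added on top of the
# loss gradient for one mini-batch only.

import numpy as np

def mba_reg_grad(emb_table, batch_ids, global_counts, lam=0.01):
    """emb_table: (V, d) embedding matrix; batch_ids: ids occurring in this
    batch; global_counts[j]: occurrences of id j over the training set."""
    grad = np.zeros_like(emb_table)
    for j in set(batch_ids):
        grad[j] = lam * emb_table[j] / global_counts[j]
    return grad
```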
1706.06978#41
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
42
For this example, C0 is an s × s Rubik’s Square with s = 2(max(m, n) + 2n) = 30. To describe configuration Cb, we need to know the effect of transformation bi. For example, Figure 4 shows the top face of a Rubik’s Square in configurations C0, a2(C0), (y2 ◦ a2)(C0), and b2(C0) = ((a2)−1 ◦ y2 ◦ a2)(C0) where a2 and y2 are defined in terms of l2 = 110 as in the reduction. Lemma 4.4. Suppose i ∈ {1, . . . , n}, and c, r ∈ {1, . . . , s/2}. Then 1. if r = i and c ≤ m such that bit c of li is 1, then bi swaps the cubies in positions (c, −r) and (−c, r) without flipping either; 2. if r = i and either c > m or c ≤ m and bit c of li is 0, then bi swaps the cubies in positions (c, r) and (−c, r) and flips them both;
1706.06708#42
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
42
[9] H. Wang and C. Schmid, “Action Recognition with Improved Trajectories,” in ICCV, 2013. [10] M. Baccouche, F. Mamalet, C. Wolf, C. Garcia, and A. Baskurt, “Sequential deep learning for human action recognition,” Human Behavior Understanding, pp. 29–39, 2011. [11] J. Carreira and A. Zisserman, “Quo vadis, action recognition? a new model and the kinetics dataset,” in CVPR, 2017. [12] C. Feichtenhofer, A. Pinz, and R. P. Wildes, “Spatiotemporal multiplier networks for video action recognition,” in CVPR, 2017. [13] S. Ji, W. Xu, M. Yang, and K. Yu, “3D Convolutional Neural Networks for Human Action Recognition,” in PAMI, 2013. [14] G. Varol, I. Laptev, and C. Schmid, “Long-term Temporal Convolutions for Action Recognition,” PAMI, 2017.
1706.06905#42
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
42
The counters hM and #c used in the BFWS algorithm for CTMP planning can be justified on domain-independent grounds. Indeed, hM corresponds roughly to the cost of a problem where both state constraints and preconditions involving procedures have been relaxed. So the plans for the relaxation are sequences of pickup and place actions involving the goal objects only. The counter #c is related to landmark heuristics under the assumption that the goals will be achieved
1706.06927#42
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
42
# 6.6 Result from model comparison on Alibaba Dataset Table 5 shows the experimental results on the Alibaba dataset with the full feature sets as shown in Table 1. As expected, LR proves to be much weaker than the deep models. Making comparisons among the deep models, we report several conclusions. First, under the same activation function and regularization, DIN itself achieves superior performance compared with all the other deep networks including BaseModel, Wide&Deep, PNN and DeepFM. DIN achieves 0.0059 absolute AUC gain and 6.08% RelaImpr over BaseModel. This validates again the useful design of the local activation unit structure. Second, an ablation study based on DIN demonstrates the effectiveness of our proposed training techniques. Training DIN with the mini-batch aware regularizer brings an additional 0.0031 absolute AUC gain over dropout. Besides, DIN with Dice brings an additional 0.0015 absolute AUC gain over PReLU.
1706.06978#42
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
43
3. all other cubies are not moved by bi. [Figure 4, panels (a)–(d): top face of the Rubik’s Square in configurations C0, a2(C0), (y2 ◦ a2)(C0), and b2(C0); tick-label residue removed] Figure 4: Applying b2 to C0 step by step (only top face shown). Proof: As noted in the proof of Lemma 4.1, a cubie is affected by bi = (ai)−1 ◦ yi ◦ ai if and only if it is moved by the yi term. Note also that (ai)−1 = ai only moves cubies within their columns and only for columns c for which bit c of li is 1. One consequence is that a cubie can only be moved by ai if its column index is positive. Any cubie moved by the yi term will have a column index of different signs before and after the yi move, so as a consequence such a cubie cannot be moved by both ai and (ai)−1.
1706.06708#43
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
43
[15] H. Jegou, M. Douze, C. Schmid, and P. Perez, “Aggregating local descriptors into a compact image representation,” in CVPR, 2010. [16] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” in Neural Computation, 1997. [17] K. Cho, B. van Merrienboer, D. Bahdanau, and Y. Bengio, “On the Properties of Neural Machine Translation: Encoder-Decoder Approaches,” arXiv preprint arXiv:1409.1259, 2014. [18] J. Donahue, L. A. Hendricks, S. Guadarrama, M. Rohrbach, S. Venugopalan, K. Saenko, and T. Darrell, “Long-term recurrent convolutional networks for visual recognition and description,” arXiv preprint arXiv:1411.4389, 2014.
1706.06905#43
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
43
Fig. 2: Manipulating objects in a 3-table environment, initial (left) and goal (right) situations. The objective is to put the blue objects on the rightmost table and the red objects on the leftmost table. through certain motion plans. The third element in our BFWS algorithm is the extension of the problem states with two extra Boolean features graspable*(o) and placeable*(o) associated with each object o. The features graspable*(o) and placeable*(o) are set to true in a state s iff the preconditions of the actions Grasp(o) and Place(o) are true in s, respectively. These features are needed as there are no state variables related to the preconditions (@graspable B A Conf(o)) and (@placeable B A) of those actions, as the predicate symbols of these atoms denote procedures. That is, the terms B, A, and Conf(o) in these atoms denote state variables but the relations themselves, denoted by the symbols @graspable and @placeable, are static.
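A minimal sketch of how these two derived features could be evaluated, with the external procedures stubbed as callables; the Hold-related checks are assumptions about the full Grasp/Place preconditions, which this excerpt does not spell out:

```python
# Derived Boolean features mirroring the Grasp(o)/Place(o) preconditions;
# graspable and placeable are external (static) procedures over state
# variables, passed in here as callables.

def graspable_star(state, o, graspable):
    """True iff Grasp(o) looks applicable: @graspable(B, A, Conf(o)) holds.
    The empty-hand check is an assumption, not stated in this excerpt."""
    return state.holding is None and graspable(state.base, state.arm,
                                               state.conf[o])

def placeable_star(state, o, placeable):
    """True iff Place(o) looks applicable: @placeable(B, A) holds.
    The holding-o check is an assumption, not stated in this excerpt."""
    return state.holding == o and placeable(state.base, state.arm)
```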
1706.06927#43
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
43
Taken together, DIN with MBA regularization and Dice achieves in total 11.65% RelaImpr and 0.0113 absolute AUC gain over BaseModel. Even compared with the competitor DeepFM, which performs best on this dataset, DIN still achieves 0.009 absolute AUC gain. It is notable that in commercial advertising systems with traffic in the hundreds of millions, 0.001 absolute AUC gain is significant and empirically worth model deployment. DIN shows great superiority in understanding and making use of the characteristics of user behavior data. Besides, the two proposed techniques further improve model performance and provide powerful help for training large-scale industrial deep networks.

6.7 Result from online A/B testing

Careful online A/B testing in the display advertising system in Alibaba was conducted from 2017-05 to 2017-06. During almost a month's testing, DIN trained with the proposed regularizer and activation function contributes up to 10.0% CTR and 3.8% RPM (Revenue Per Mille) promotion^4 compared with the introduced BaseModel, the last version of our online-serving model. This is a significant improvement and demonstrates the effectiveness of our proposed approaches. DIN has now been deployed online and serves the main traffic.
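For reference, the quoted gains can be reproduced from the AUC values in Table 5; a quick check in Python, assuming RelaImpr is the (AUC − 0.5)-normalized relative improvement defined in the paper:

```python
# RelaImpr measures relative improvement over a base model, discounting the
# 0.5 AUC achieved by random guessing:
#   RelaImpr = ((AUC_model - 0.5) / (AUC_base - 0.5) - 1) * 100%
def rela_impr(auc_model: float, auc_base: float) -> float:
    return ((auc_model - 0.5) / (auc_base - 0.5) - 1.0) * 100.0

auc_base = 0.5970                               # BaseModel on the Alibaba dataset
print(round(rela_impr(0.6083, auc_base), 2))    # 11.65 -> DIN + MBA Reg. + Dice
print(round(rela_impr(0.5993, auc_base), 2))    # 2.37  -> DeepFM
print(round(0.6083 - auc_base, 4))              # 0.0113 absolute AUC gain
print(round(0.6083 - 0.5993, 4))                # 0.009 absolute gain over DeepFM
```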
1706.06978#43
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
44
Thus there are three possibilities for cubies that are moved by bi: (1) the cubie is moved only by yi, (2) the cubie is moved by ai and then by yi, and (3) the cubie is moved by yi and then by (ai)−1. Consider any cubie of type (1) whose coordinates have absolute values c and r. Since the cubie is moved by yi, we know that r = i. Since it is not moved by either ai or (ai)−1, we know that the cubie’s column index both before and after the move is not one of the column indices affected by ai. But these two column indices are c and −c (in some order). Therefore it must not be the case that bit c of li is 1. Also note that cubies of this type are flipped exactly once. Putting that together, we see that if c ∈ {1, . . . , s/2}, r = i, and it is not the case that bit c of li exists and is 1, then bi swaps the cubies in positions (c, r) and (−c, r) and flips them both.
1706.06708#44
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
44
[19] S. Abu-El-Haija, N. Kothari, J. Lee, P. Natsev, G. Toderici, B. Varadarajan, and S. Vijayanarasimhan, “Youtube-8m: A large-scale video classification benchmark,” arXiv preprint arXiv:1609.08675, 2016. [20] G. Csurka, C. Dance, L. Fan, J. Willamowski, and C. Bray, “Visual categorization with bags of keypoints,” in ECCV Workshop, 2004. [21] J. Sivic and A. Zisserman, “Video google: A text retrieval approach to object matching in videos,” in ICCV, 2003. [22] F. Perronnin and C. Dance, “Fisher kernels on visual vocabularies for image categorization,” in CVPR, 2007. [23] R. Arandjelovic, P. Gronat, A. Torii, T. Pajdla, and J. Sivic, “NetVLAD: CNN architecture for weakly supervised place recognition,” in CVPR, 2016.
1706.06905#44
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
44
Finally, for the experimental results we have found it useful to add an extra precondition to the action MoveArm(t). This precondition requires that @target-a(t) is the resting configuration ca0 or that @placeable(Base, @target-a(t)) is true. In other words, the arm is moved from the resting position only to configurations where an object could be picked up or placed. This restriction reduces the average branching factor of the planning problem, in particular when the number of arm motions in the arm graph is large.

# VI. EXPERIMENTAL EVALUATION
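A small sketch of this pruning precondition, again with assumed names mirroring the notation above:

```python
def movearm_extra_precondition(t, state, target_a, placeable_rel, ca0):
    """Prune MoveArm(t): the arm only moves to the resting configuration ca0
    or to configurations from which an object could be picked up or placed."""
    target = target_a[t]            # @target-a(t)
    return target == ca0 or (state["base"], target) in placeable_rel
```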
1706.06927#44
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
44
^4 In our real advertising system, ads are ranked by CTR^α · bid-price with α > 1.0, which controls the balance of promotion of CTR and RPM.

Table 5: Model Comparison on Alibaba Dataset with full feature sets. All the lines calculate RelaImpr by comparing with BaseModel. DIN significantly outperforms all the other competitors. Besides, training DIN with our proposed mini-batch aware regularizer and Dice activation function brings further improvements.

| Model | AUC | RelaImpr |
| --- | --- | --- |
| LR | 0.5738 | -23.92% |
| BaseModel^{a,b} | 0.5970 | 0.00% |
| Wide&Deep^{a,b} | 0.5977 | 0.72% |
| PNN^{a,b} | 0.5983 | 1.34% |
| DeepFM^{a,b} | 0.5993 | 2.37% |
| DIN Model^{a,b} | 0.6029 | 6.08% |
| DIN with MBA Reg.^a | 0.6060 | 9.28% |
| DIN with Dice^b | 0.6044 | 7.63% |
| DIN with MBA Reg. and Dice | 0.6083 | 11.65% |

^a These lines are trained with PReLU as the activation function. ^b These lines are trained with dropout regularization.
1706.06978#44
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
45
Consider any cubie of type (2) whose coordinates have absolute values c and r. Since the cubie is first moved by ai and then by yi, we know that r = i and that c ≤ m with bit c of li equal to 1. Furthermore, the cubie must have started in position (c, −r), then moved to position (c, r) by ai, and then moved to position (−c, r) by yi. Since this cubie is flipped twice, it is overall not flipped. Consider on the other hand any cubie of type (3) whose coordinates have absolute values c and r. Since the cubie is first moved by yi and then by (ai)−1 = ai, we know that r = i and that c ≤ m with bit c of li equal to 1. Furthermore, the cubie must have started in position (−c, r), then moved to position (c, r) by yi, and then moved to position (c, −r) by ai. Since this cubie is flipped twice, it is overall not flipped. Putting that together, we see that if r = i, and bit c of li is 1, then bi swaps the cubies in positions (c, −r) and (−c, r) without flipping either.
1706.06708#45
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
45
[24] C. R. de Souza, A. Gaidon, E. Vig, and A. M. López, “Sympathy for the details: Dense trajectories and hybrid classification architectures for action recognition,” in ECCV, 2016. [25] A. Karpathy, G. Toderici, S. Shetty, T. Leung, R. Sukthankar, and L. Fei-Fei, “Large-scale video classification with convolutional neural networks,” in CVPR, 2014, pp. 1725–1732. [26] R. Girdhar, D. Ramanan, A. Gupta, J. Sivic, and B. Russell, “ActionVLAD: Learning spatio-temporal aggregation for action classification,” in CVPR, 2017. [27] L. Wang, Y. Qiao, and X. Tang, “Action recognition with trajectory-pooled deep-convolutional descriptors,” in CVPR, 2015, pp. 4305–4314. [28] K. Simonyan and A. Zisserman, “Two-stream convolutional networks for action recognition in videos,” in ICLR, 2014, pp. 568–576.
1706.06905#45
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
45
# VI. EXPERIMENTAL EVALUATION

We test our model on two environments having one and three tables, the characteristics of which are shown in Table I. As explained above, the virtual space of the robot is discretized into D = 15 position pairs or virtual configurations, with k = 4 grasping poses per virtual configuration and k′ = 4 arm trajectories for each of those grasping poses, obtained from MoveIt. Thus, the maximum number of (virtual) grasping poses will be D × k = 60, of which those for which no motion plan is found get pruned. In our benchmark environments, the total number of virtual grasping poses is 42. In turn, the maximum number of arm trajectories is D × k × k′ = 240 in each direction, i.e. 480, while in both of our environments we have a total of 268 such trajectories, since again no feasible motion plans are found for the rest. The number of
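The counts above follow directly from the discretization parameters; a quick arithmetic check in Python:

```python
D, k, k_prime = 15, 4, 4                     # virtual configs, grasping poses, arm trajectories
max_grasping_poses = D * k                   # 60 virtual grasping poses; 42 survive pruning
max_traj_per_direction = D * k * k_prime     # 240 arm trajectories per direction
max_traj_total = 2 * max_traj_per_direction  # 480 overall; 268 found feasible
print(max_grasping_poses, max_traj_per_direction, max_traj_total)  # 60 240 480
```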
1706.06927#45
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
45
^a These lines are trained with PReLU as the activation function. ^b These lines are trained with dropout regularization.

It is worth mentioning that online serving of industrial deep networks is not an easy job with hundreds of millions of users visiting our system every day. Even worse, at traffic peak our system serves more than 1 million users per second. It is required to make real-time CTR predictions with high throughput and low latency. For example, in our real system we need to predict hundreds of ads for each visitor in less than 10 milliseconds. In our practice, several important techniques are deployed for accelerating online serving of industrial deep networks under the CPU-GPU architecture: i) request batching, which merges adjacent requests from CPU to take advantage of GPU power, ii) GPU memory optimization, which improves the access pattern to reduce wasted transactions in GPU memory, iii) concurrent kernel computation, which allows execution of matrix computations to be processed with multiple CUDA kernels concurrently. In all, optimization of these techniques practically doubles the QPS (Query Per Second) capacity of a single machine. Online serving of DIN also benefits from this.
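To illustrate the first technique, request batching, a toy Python sketch; all names and parameters here are assumptions for illustration, since the production system's interfaces are not described in the paper:

```python
from queue import Queue, Empty

def batched_requests(q: Queue, max_batch: int = 64, wait_s: float = 0.002):
    """Merge adjacent CTR requests from a queue into one batch for the GPU."""
    while True:
        batch = [q.get()]              # block until at least one request arrives
        while len(batch) < max_batch:
            try:
                batch.append(q.get(timeout=wait_s))
            except Empty:
                break                  # small wait budget keeps tail latency low
        yield batch                    # score the whole batch in one GPU call
```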
1706.06978#45
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
46
This covers the three types of cubies that are moved by bi. All other cubies remain in place. We can apply the above to figure out the effect of transformation b1 ◦ b2 ◦ · · · ◦ bn on configuration C0. In particular, that allows us to learn the coloring of configuration Cb. Theorem 4.5. In Cb, a cubie has top face blue if and only if it is in position (c, r) such that 1 ≤ r ≤ n and either |c| > m, or |c| ≤ m and bit |c| of lr is 0.
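Since Theorem 4.5 gives an explicit characterization, the top-face coloring of Cb can be computed directly from the bit strings; a small Python sketch (the indexing conventions are assumptions matching the coordinate notation of the text):

```python
# Sketch: the set of positions (c, r) whose top face is blue in C_b,
# per Theorem 4.5. ls[r-1] holds the bits of l_r, so bit |c| of l_r is
# ls[r-1][abs(c)-1]; columns range over -s/2..-1, 1..s/2 of an s x s square.
def top_face_blue(ls, n, s):
    m = len(ls[0])
    cols = [c for c in range(-s // 2, s // 2 + 1) if c != 0]
    return {(c, r)
            for r in range(1, n + 1)
            for c in cols
            if abs(c) > m or ls[r - 1][abs(c) - 1] == 0}
```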
1706.06708#46
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
46
action recognition in videos,” in ICLR, 2014, pp. 568–576. [29] B. Fernando, E. Gavves, J. M. Oramas, A. Ghodrati, and T. Tuytelaars, “Modeling video evolution for action recognition,” in CVPR, 2015. [30] M. Ibrahim, S. Muralidharan, Z. Deng, A. Vahdat, and G. Mori, “A Hierarchical Deep Temporal Model for Group Activity Recognition,” in CVPR, 2016. [31] G. Lev, G. Sadeh, B. Klein, and L. Wolf, “Rnn fisher vectors for action recognition and image annotation,” in ECCV, 2016. [32] J. Yue-Hei Ng, M. Hausknecht, S. Vijayanarasimhan, O. Vinyals, R. Monga, and G. Toderici, “Beyond short snippets: Deep networks for video classification,” in CVPR, 2015.
1706.06905#46
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
46
sampled bases is 124 for the one-table environment and 323 for the three-table environment, while each robot base in the base graph is connected to a maximum of 12 closest base configurations. Importantly, the output of the precompilation phase, which takes 5 min. (13 min.) for the one-table (three-table) environment, is valid for all instances with that number of tables, regardless of the number of objects, initial robot and object configurations, and particular goals of the problem.
1706.06927#46
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
46
6.8 Visualization of DIN

Finally we conduct a case study to reveal the inner structure of DIN on the Alibaba dataset. We first examine the effectiveness of the local activation unit. Fig. 5 illustrates the activation intensity of user behaviors with respect to a candidate ad. As expected, behaviors with high relevance to the candidate ad are weighted high. We then visualize the learned embedding vectors. Taking the young mother mentioned before as an example, we randomly select 9 categories (dress, sport shoes, bags, etc.) and 100 goods of each category as the candidate ads for her. Fig. 6 shows the visualization of embedding vectors of goods with t-SNE [17] learned by DIN, in which points with the same shape correspond to the same category. We can see that goods with the same category almost belong to one cluster, which shows the clustering property of DIN embeddings clearly. Besides, we color the points that represent candidate ads by the prediction value. Fig. 6 is also a heat map of this mother’s interest density distribution for potential candidates in embedding space. It shows DIN can form a multimodal interest density distribution in

Figure 5: Illustration of adaptive activation in DIN. Behaviors with high relevance to the candidate ad get high activation weight.
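As a rough illustration of the local activation unit examined above, a minimal NumPy sketch: each behavior embedding receives a weight from a small feed-forward net conditioned on the candidate ad, and the user representation is the weighted sum. The layer sizes and the element-wise product as interaction feature are assumptions for illustration, not the exact production architecture; per the paper, the weights are deliberately not softmax-normalized.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 8                                    # embedding size (illustrative)
W1 = 0.1 * rng.normal(size=(3 * d, 16))  # toy weights of the activation unit
W2 = 0.1 * rng.normal(size=(16, 1))

def user_interest(behaviors: np.ndarray, ad: np.ndarray) -> np.ndarray:
    """Weighted sum pooling over behavior embeddings, conditioned on the ad."""
    feats = np.concatenate(
        [behaviors, behaviors * ad, np.tile(ad, (len(behaviors), 1))], axis=1)
    hidden = np.maximum(feats @ W1, 0.0)      # ReLU stands in for PReLU/Dice
    weights = hidden @ W2                     # one unnormalized weight per behavior
    return (weights * behaviors).sum(axis=0)  # adaptive user representation

behaviors = rng.normal(size=(5, d))           # 5 historical behavior embeddings
ad = rng.normal(size=(1, d))                  # candidate ad embedding
print(user_interest(behaviors, ad).shape)     # (8,)
```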
1706.06978#46
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
47
Proof: Cb is obtained from C0 by applying transformation b1 ◦ b2 ◦ · · · ◦ bn. A cubie has top face blue in Cb if and only if transformation b1 ◦ b2 ◦ · · · ◦ bn flips that cubie an odd number of times. Each bi affects a disjoint set of cubies. Thus, among the cubies affected by some particular bi, the only ones that end up blue face up are the ones that are flipped by bi. By Lemma 4.4, these are the cubies in row i with column c such that it is not the case that bit |c| of li is 1. Tallying up those cubies over all the bi yields exactly the set of blue-face-up cubies given in the theorem statement. This concludes the description of Cb in terms of colors. The coloring of configuration Ct—the configuration that is actually obtained by applying the reduction to l1, . . . , ln—can be obtained from the coloring of configuration Cb by applying transformation a1.
1706.06708#47
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
47
[33] L. Wang, Y. Xiong, Y. Qiao, D. Lin, X. Tang, and L. Van Gool, “Temporal segment networks: Towards good practices for deep action recognition,” in ECCV, 2016. [34] X. Peng, L. Wang, Y. Qiao, and Q. Peng, “Boosting VLAD with Supervised Dictionary Learning and High-Order Statistics,” in ECCV, 2014. [35] Z. Xu, Y. Yang, and A. G. Hauptmann, “A Discriminative CNN Video Representation for Event Detection,” in CVPR, 2015. [36] F. Perronnin and D. Larlus, “Fisher Vectors Meet Neural Networks: A Hybrid Classification Architecture,” in CVPR, 2015. [37] X. Peng, C. Zou, Y. Qiao, and Q. Peng, “Action recognition with stacked fisher vectors,” in ECCV, 2014. [38] V. Sydorov, M. Sakurada, and C. H. Lampert, “Deep fisher kernels and end to end learning of the Fisher kernel GMM parameters,” in CVPR, 2014.
1706.06905#47
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
47
For each environment, we generate a number of semi-random instances with an increasing number of objects, ranging from 10 to 40, and an increasing number of goals, ranging from 2 to 8, where a problem with e.g. 4 goals might require that 4 different objects be placed in their respective, given target configurations. The initial and goal states of a sample problem instance are shown in Fig. 2, where the robot needs to place all blue objects on one table and all red objects on another. Tables IIa and IIb show the results of our BFWS planner on each generated instance, running with a maximum of 30 minutes and 8 GB of memory on an AMD Opteron 6300 @ 2.4 GHz. The planner uses ROS [27], Gazebo [15], and MoveIt [32] in the preprocessing and in the simulations, but not at planning time. Videos showing the execution of the computed plans in the Gazebo simulator, for some selected instances, can be found at bit.ly/2fnXeAd. The results show that our approach is competitive and scales well with the number of objects on the table. The length of the obtained plans ranges from 22 to 220 steps. Problems with up to 20 objects, both for one and three tables, for example, are
1706.06927#47
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
47
Figure 6: Visualization of embeddings of goods in DIN. Shape of points represents category of goods. Color of points corresponds to CTR prediction value.

candidates’ embedding space for a certain user to capture his/her diverse interests.

7 CONCLUSIONS

In this paper, we focus on the task of CTR prediction modeling in the scenario of display advertising in the e-commerce industry with rich user behavior data. The use of a fixed-length representation in traditional deep CTR models is a bottleneck for capturing the diversity of user interests. To improve the expressive ability of the model, a novel approach named DIN is designed to activate related user behaviors and obtain an adaptive representation vector for user interests which varies over different ads. Besides, two novel techniques are introduced to help training industrial deep networks and further improve the performance of DIN. They can be easily generalized to other industrial deep learning tasks. DIN has now been deployed in the online display advertising system in Alibaba.

REFERENCES [1] Dzmitry Bahdanau, Kyunghyun Cho, and Yoshua Bengio. 2015. Neural Machine Translation by Jointly Learning to Align and Translate. In Proceedings of the 3rd International Conference on Learning Representations. [2] Yoshua Bengio, Réjean Ducharme, et al. 2003. A neural probabilistic language model. Journal of Machine Learning Research (2003), 1137–1155.
1706.06978#47
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
48
Applying Theorem 4.5 to the previously given example, we obtain the coloring of the Rubik’s Square in configuration Cb as shown in Figure 5a. Note that the n × m grid of bits comprising l1, . . . , ln is actually directly encoded in the coloring of a section of the Rubik’s Square. In addition, the coloring of the Rubik’s Square in configuration Ct is shown for the same example in Figure 5b.

4.5 (Group) Rubik’s Square solution → Promise Cubical Hamiltonian Path solution

Below, we prove the following theorem: Theorem 4.6. If (Ct, k) is a “yes” instance to the Rubik’s Square problem, then l1, . . . , ln is a “yes” instance to the Promise Cubical Hamiltonian Path problem.
1706.06708#48
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
48
[39] Y. N. Dauphin, A. Fan, M. Auli, and D. Grangier, “Language modeling with gated convolutional networks,” arXiv preprint arXiv:1612.08083, 2016. [40] A. Miech, I. Laptev, and J. Sivic, “Learnable pooling with context gating for video classification,” arXiv preprint arXiv:1706.06905, 2017. [41] J. Hu, L. Shen, and G. Sun, “Squeeze-and-excitation networks,” arXiv preprint arXiv:1709.01507, 2017. [42] M. I. Jordan, “Hierarchical mixtures of experts and the em algorithm,” Neural Computation, 1994. [43] J. Philbin, O. Chum, M. Isard, J. Sivic, and A. Zisserman, “Lost in quantization: Improving particular object retrieval in large scale image databases,” in CVPR, 2008. [44] N. Passalis and A. Tefas, “Learning neural bag-of-features for large scale image retrieval,” IEEE Trans. Cybernetics, 2017.
1706.06905#48
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
48
in the table. The length of the obtained plans ranges from 22 to 220 steps. Problems with up to 20 objects, both for one and three tables, for example, are solved in a few seconds, requiring only the expansion of a few thousand nodes in the search tree. Problems with up to 30 and even 40 objects are solved with relative ease in the environment with three tables, but, as expected, become much harder when we have a single table, because the objects clutter almost all available space, making it harder for the robot arm to move collision-free. Indeed, the results show that the key parameter for scalability is #c, which in a sense indicates how cluttered the space is in the initial situation. When this number is not too high, as in the three-table environment, our approach scales
1706.06927#48
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
48
[3] Paul Covington, Jay Adams, and Emre Sargin. 2016. Deep neural networks for youtube recommendations. In Proceedings of the 10th ACM Conference on Recommender Systems. ACM, 191–198. [4] Cheng H. et al. 2016. Wide & deep learning for recommender systems. In Proceedings of the 1st Workshop on Deep Learning for Recommender Systems. ACM. [5] Qu Y. et al. 2016. Product-Based Neural Networks for User Response Prediction. In Proceedings of the 16th International Conference on Data Mining. [6] Wang H. et al. 2018. DKN: Deep Knowledge-Aware Network for News Recommendation. In Proceedings of the 26th International World Wide Web Conference. [7] Zhu H. et al. 2017. Optimized Cost per Click in Taobao Display Advertising. In Proceedings of the 23rd International Conference on Knowledge Discovery and Data Mining. ACM, 2191–2200. [8] Tom Fawcett. 2006. An introduction to ROC analysis. Pattern recognition letters 27, 8 (2006), 861–874.
1706.06978#48
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods large scale sparse input features are first mapped into low dimensional embedding vectors, and then transformed into fixed-length vectors in a group-wise manner, finally concatenated together to fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, in regardless of what candidate ads are. The use of fixed-length vector will be a bottleneck, which brings difficulty for Embedding\&MLP methods to capture user's diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model: Deep Interest Network (DIN) which tackles this challenge by designing a local activation unit to adaptively learn the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, improving the expressive ability of model greatly. Besides, we develop two techniques: mini-batch aware regularization and data adaptive activation function which can help training industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba real production dataset with over 2 billion samples demonstrate the effectiveness of proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN now has been successfully deployed in the online display advertising system in Alibaba, serving the main traffic.
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
49
(a) The top face of Cb for the example input l1, . . . , ln. (b) The top face of Ct for the example input l1, . . . , ln. Figure 5: The coloring of the Rubik’s Square for the example input l1, . . . , ln.

By Lemma 2.1, this will immediately also imply the following corollary: Corollary 4.7. If (t, k) is a “yes” instance to the Group Rubik’s Square problem, then l1, . . . , ln is a “yes” instance to the Promise Cubical Hamiltonian Path problem. To prove the theorem, we consider a hypothetical solution to the (Ct, k) instance of the Rubik’s Square problem. A solution consists of a sequence of Rubik’s Square moves m1, . . . , mk′ with k′ ≤ k such that C′ = (mk′ ◦ · · · ◦ m1)(Ct) is a solved configuration of the Rubik’s Square. Throughout the proof, we will use only the fact that move sequence m1, . . . , mk′ solves the top and bottom faces of the Rubik’s Square in configuration Ct.
1706.06708#49
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
49
[44] N. Passalis and A. Tefas, “Learning neural bag-of-features for large scale image retrieval,” IEEE Trans. Cybernetics, 2017. [45] A. Richard and J. Gall, “A bag-of-words equivalent recurrent neural network for action recognition,” in BMVC, 2015. [46] K. Simonyan, A. Vedaldi, and A. Zisserman, “Deep fisher networks for large-scale image classification,” in NIPS, 2013. [47] M. Douze, J. Revaud, C. Schmid, and H. J´egou, “Stable hyper-pooling and query expansion for event detection,” in ICCV, 2013.
1706.06905#49
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
49
TABLE I: Compilation data for one and three tables. Columns show the number of tables, total number of arm trajectories, arm configurations, base configurations, total number of robot configurations, virtual object configurations, number of virtual grasping poses, relative object configurations, total number of real object configurations, and overall compilation time.

| tables | trajectories | arm conf. | base conf. | total conf. | virtual conf. | virtual GP | relative conf. | real conf. | Time (min.) |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| 1 | 268 | 43 | 124 | 5332 | 15 | 42 | 1081 | 136 | 5 |
| 3 | 268 | 43 | 323 | 13889 | 15 | 42 | 3379 | 393 | 13 |
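The totals in Table I are internally consistent: the total number of robot configurations equals the product of arm and base configurations, which can be checked directly:

```python
# cross-check of Table I: total robot configs = arm configs * base configs
assert 43 * 124 == 5332   # one-table environment
assert 43 * 323 == 13889  # three-table environment
```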
1706.06927#49
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills up geometric detail. It is known however that such a decomposition is not effective in general as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, that unlike PDDL make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]
1706.06978
49
[8] Tom Fawcett. 2006. An introduction to ROC analysis. Pattern recognition letters 27, 8 (2006), 861–874. [9] Kun Gai, Xiaoqiang Zhu, et al. 2017. Learning Piece-wise Linear Models from Large Scale Data for Ad Click Prediction. arXiv preprint arXiv:1704.05194 (2017). [10] Huifeng Guo, Ruiming Tang, et al. 2017. DeepFM: A Factorization-Machine based Neural Network for CTR Prediction. In Proceedings of the 26th International Joint Conference on Artificial Intelligence. 1725–1731.
1706.06978#49
Deep Interest Network for Click-Through Rate Prediction
Click-through rate prediction is an essential task in industrial applications, such as online advertising. Recently, deep learning based models have been proposed, which follow a similar Embedding\&MLP paradigm. In these methods, large-scale sparse input features are first mapped into low-dimensional embedding vectors, then transformed into fixed-length vectors in a group-wise manner, and finally concatenated and fed into a multilayer perceptron (MLP) to learn the nonlinear relations among features. In this way, user features are compressed into a fixed-length representation vector, regardless of what the candidate ads are. This fixed-length vector becomes a bottleneck, making it difficult for Embedding\&MLP methods to capture users' diverse interests effectively from rich historical behaviors. In this paper, we propose a novel model, the Deep Interest Network (DIN), which tackles this challenge by designing a local activation unit that adaptively learns the representation of user interests from historical behaviors with respect to a certain ad. This representation vector varies over different ads, greatly improving the expressive ability of the model. Besides, we develop two techniques, mini-batch aware regularization and a data adaptive activation function, which help train industrial deep networks with hundreds of millions of parameters. Experiments on two public datasets as well as an Alibaba production dataset with over 2 billion samples demonstrate the effectiveness of the proposed approaches, which achieve superior performance compared with state-of-the-art methods. DIN has now been successfully deployed in the online display advertising system at Alibaba, serving the main traffic.
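To make the local activation idea concrete — each past behavior receives a weight computed from that behavior together with the candidate ad, so the pooled user representation varies per ad — here is a minimal NumPy sketch. The helper name, the feature combination fed to the scoring MLP, and all weights are illustrative assumptions, not the paper's exact unit:

```python
import numpy as np

def local_activation_pool(behaviors, ad, w1, b1, w2, b2):
    """Weighted sum-pooling of user-behavior embeddings, where each
    behavior's weight comes from a small MLP conditioned on the
    candidate ad (a simplified stand-in for DIN's activation unit)."""
    scores = []
    for e in behaviors:
        # Feature for the scoring MLP: behavior, elementwise product, ad.
        x = np.concatenate([e, e * ad, ad])
        hid = np.maximum(w1 @ x + b1, 0.0)   # ReLU hidden layer
        scores.append(float(w2 @ hid + b2))  # unnormalized weight
    w = np.array(scores)
    return (w[:, None] * np.stack(behaviors)).sum(axis=0)

rng = np.random.default_rng(0)
d, h = 8, 16
behaviors = [rng.normal(size=d) for _ in range(5)]  # user's past items
ad = rng.normal(size=d)                             # candidate ad embedding
w1, b1 = rng.normal(size=(h, 3 * d)) * 0.1, np.zeros(h)
w2, b2 = rng.normal(size=h) * 0.1, 0.0
print(local_activation_pool(behaviors, ad, w1, b1, w2, b2).shape)  # (8,)
```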
http://arxiv.org/pdf/1706.06978
Guorui Zhou, Chengru Song, Xiaoqiang Zhu, Ying Fan, Han Zhu, Xiao Ma, Yanghui Yan, Junqi Jin, Han Li, Kun Gai
stat.ML, cs.LG, I.2.6; H.3.2
Accepted by KDD 2018
null
stat.ML
20170621
20180913
[ { "id": "1704.05194" } ]
1706.06708
50
The main idea of the proof relies on three major steps. In the first step, we show that m1, . . . , mℓ must flip row i an odd number of times if i ∈ {1, . . . , n}, and an even number of times otherwise. We then define the set O ⊆ {1, . . . , n} (where O stands for “one”) to be the set of indices i such that there is exactly one index-i row move. Clearly, in order to satisfy the parity constraints, every i ∈ O must have one row i move and zero row −i moves in m1, . . . , mℓ. The second step of the proof is to show that, if i1, i2 ∈ O, then the number of column moves in m1, . . . , mℓ between the single flip of row i1 and the single flip of row i2 is at least the Hamming distance between l_{i1} and l_{i2}. The final step of the proof is a counting argument. There are four types of moves in m1, . . . , mℓ:
1. index-i row moves with i ∈ O (all of which are positive moves as shown above),
2. index-i row moves with i ∈ {1, . . . , n} \ O,
3. column moves, and
4. index-i row moves with i ∉ {1, . . . , n}.
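To make the bookkeeping in the first two steps concrete, here is a small illustrative Python sketch; the move encoding and helper names are assumptions for illustration, not the paper's formal construction:

```python
def row_flip_parities(moves, n):
    """Parity of the number of times each row in {1, ..., n} is flipped.

    `moves` is a list of ("row", i) / ("col", j) pairs, and a ("row", i)
    move flips row |i|; this encoding is assumed for illustration.
    The first proof step requires odd parity exactly for i in {1, ..., n}.
    """
    counts = {}
    for kind, idx in moves:
        if kind == "row":
            counts[abs(idx)] = counts.get(abs(idx), 0) + 1
    return {i: counts.get(i, 0) % 2 for i in range(1, n + 1)}

def hamming(a, b):
    """Hamming distance between two equal-length row labels; the second
    proof step uses it as a lower bound on intervening column moves."""
    return sum(x != y for x, y in zip(a, b))

moves = [("row", 2), ("col", 1), ("row", 1), ("col", 3), ("row", 2), ("row", 2)]
print(row_flip_parities(moves, 3))   # {1: 1, 2: 1, 3: 0}: row 3 has even parity
print(hamming([0, 1, 1, 0], [1, 1, 0, 0]))  # 2
```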
1706.06708#50
Solving the Rubik's Cube Optimally is NP-complete
In this paper, we prove that optimally solving an $n \times n \times n$ Rubik's Cube is NP-complete by reducing from the Hamiltonian Cycle problem in square grid graphs. This improves the previous result that optimally solving an $n \times n \times n$ Rubik's Cube with missing stickers is NP-complete. We prove this result first for the simpler case of the Rubik's Square---an $n \times n \times 1$ generalization of the Rubik's Cube---and then proceed with a similar but more complicated proof for the Rubik's Cube case.
http://arxiv.org/pdf/1706.06708
Erik D. Demaine, Sarah Eisenstat, Mikhail Rudoy
cs.CC, cs.CG, math.CO, F.1.3
35 pages, 8 figures
null
cs.CC
20170621
20180427
[]
1706.06905
50
[48] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G. S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, and X. Zheng, “Tensorflow: Large-scale machine learning on heterogeneous distributed systems,” arXiv preprint arXiv:1603.04467, 2016.
1706.06905#50
Learnable pooling with Context Gating for video classification
Current methods for video analysis often extract frame-level features using pre-trained convolutional neural networks (CNNs). Such features are then aggregated over time e.g., by simple temporal averaging or more sophisticated recurrent neural networks such as long short-term memory (LSTM) or gated recurrent units (GRU). In this work we revise existing video representations and study alternative methods for temporal aggregation. We first explore clustering-based aggregation layers and propose a two-stream architecture aggregating audio and visual features. We then introduce a learnable non-linear unit, named Context Gating, aiming to model interdependencies among network activations. Our experimental results show the advantage of both improvements for the task of video classification. In particular, we evaluate our method on the large-scale multi-modal Youtube-8M v2 dataset and outperform all other methods in the Youtube 8M Large-Scale Video Understanding challenge.
http://arxiv.org/pdf/1706.06905
Antoine Miech, Ivan Laptev, Josef Sivic
cs.CV
Presented at Youtube 8M CVPR17 Workshop. Kaggle Winning model. Under review for TPAMI
null
cs.CV
20170621
20180305
[ { "id": "1502.03167" }, { "id": "1602.07261" }, { "id": "1706.05150" }, { "id": "1609.08675" }, { "id": "1706.06905" }, { "id": "1603.04467" }, { "id": "1706.04572" }, { "id": "1707.00803" }, { "id": "1612.08083" }, { "id": "1707.04555" }, { "id": "1709.01507" } ]
1706.06927
50
First table:

| #o | #g | #c | L   | E     | Prep | Search | Total |
|----|----|----|-----|-------|------|--------|-------|
| 10 | 1  | 4  | 38  | 700   | 2.4  | 0.08   | 2.48  |
| 10 | 2  | 6  | 67  | 5.7k  | 2.42 | 0.64   | 3.06  |
| 10 | 3  | 8  | 73  | 6.1k  | 2.22 | 0.72   | 2.94  |
| 15 | 1  | 6  | 49  | 778   | 3.4  | 0.1    | 3.5   |
| 15 | 2  | 8  | 81  | 9.8k  | 3.76 | 1.27   | 5.03  |
| 15 | 3  | 10 | 80  | 7.7k  | 4.13 | 0.97   | 5.1   |
| 20 | 1  | 12 | 86  | 39k   | 5.44 | 4.46   | 9.9   |
| 20 | 2  | 14 | 122 | 63.3k | 5.85 | 9.42   | 15.27 |
| 20 | 3  | 22 | 159 | 49.2k | 5.66 | 7.26   | 12.92 |
| 25 | 1  | 4  | 22  | 206   | 7.42 | 0.03   | 7.45  |
| 25 | 2  | 4  | 45  | 39.1k | 7.29 | 5.54   | 12.83 |
| 25 | 3  | 18 | MO  | -     | -    | -      | -     |
| 30 | 1  | 4  | 22  | 67.6k | 9.21 | 10.16  | 19.37 |
| 30 | 2  | 38 | MO  | -     | -    | -      | -     |
| 30 | 3  | 38 | TO  | -     | -    | -      | -     |

Second table (cut off at the chunk boundary; L values beyond the fifth row are missing):

| #o | #g | #c | L   |
|----|----|----|-----|
| 10 | 2  | 6  | 54  |
| 10 | 4  | 2  | 101 |
| 10 | 6  | 2  | 121 |
| 10 | 8  | 2  | 150 |
| 20 | 2  | 4  | 65  |
| 20 | 4  | 4  |     |
| 20 | 6  | 6  |     |
| 20 | 8  | 8  |     |
| 25 | 2  | 8  |     |
| 25 | 4  | 8  |     |
| 25 | 6  | 10 |     |
| 25 | 8  | 12 |     |
| 30 | 2  | 4  |     |
| 30 | 4  | 2  |     |
| 30 | 6  | 8  |     |
| 30 | 8  | 10 |     |
| 40 | 2  | 4  |     |
| 40 | 4  | 14 |     |
| 40 | 6  | 10 |     |
| 40 | 8  | 14 |     |
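In the first table, Total matches Prep + Search on every completed row, which confirms the column alignment of the reconstruction. A quick check:

```python
# Prep + Search = Total for every completed row of the first table,
# e.g. 5.85 + 9.42 = 15.27 and 9.21 + 10.16 = 19.37.
rows = [(2.4, 0.08, 2.48), (2.42, 0.64, 3.06), (2.22, 0.72, 2.94),
        (5.85, 9.42, 15.27), (9.21, 10.16, 19.37)]
for prep, search, total in rows:
    assert abs((prep + search) - total) < 1e-9
print("Total = Prep + Search holds for all checked rows")
```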
1706.06927#50
Combined Task and Motion Planning as Classical AI Planning
Planning in robotics is often split into task and motion planning. The high-level, symbolic task planner decides what needs to be done, while the motion planner checks feasibility and fills in the geometric detail. It is known, however, that such a decomposition is not effective in general, as the symbolic and geometrical components are not independent. In this work, we show that it is possible to compile task and motion planning problems into classical AI planning problems; i.e., planning problems over finite and discrete state spaces with a known initial state, deterministic actions, and goal states to be reached. The compilation is sound, meaning that classical plans are valid robot plans, and probabilistically complete, meaning that valid robot plans are classical plans when a sufficient number of configurations is sampled. In this approach, motion planners and collision checkers are used for the compilation, but not at planning time. The key elements that make the approach effective are 1) expressive classical AI planning languages for representing the compiled problems in compact form, which, unlike PDDL, make use of functions and state constraints, and 2) general width-based search algorithms capable of finding plans over huge combinatorial spaces using weak heuristics only. Empirical results are presented for a PR2 robot manipulating tens of objects, for which long plans are required.
http://arxiv.org/pdf/1706.06927
Jonathan Ferrer-Mestres, Guillem Francès, Hector Geffner
cs.RO, cs.AI
10 pages, 2 figures
null
cs.RO
20170621
20170621
[]