doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1704.01696 | 65 | reason Partially implemented effect: only deal 3 damage to opponent's characters
input <name> Darkscale Healer </name> <cost> 5 </cost> <attack> 4 </attack> <defense> 5 </defense> <desc>
Battlecry: Restore 2 Health to all friendly characters. </desc> <rarity> Common </rarity>... pred. class DarkscaleHealer(MinionCard): def __init__(self): super().__init__('Darkscale Healer', 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, battlecry=Battlecry(Damage(2), CharacterSelector(players=BothPlayer(), picker=UserPicker()))) def create_minion(self, player): return Minion(4, 5) X ref. class DarkscaleHealer(MinionCard): def __init__(self): super().__init__('Darkscale Healer', 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON, battlecry=Battlecry(Heal(2), CharacterSelector())) def create_minion(self, player): return Minion(4, 5) reason Incorrect effect: damage 2 health instead of restoring. Cast effect to all players instead of friendly players only. | 1704.01696#65 | A Syntactic Neural Model for General-Purpose Code Generation | We consider the problem of parsing natural language descriptions into source
code written in a general-purpose programming language like Python. Existing
data-driven methods treat this problem as a language generation task without
considering the underlying syntax of the target programming language. Informed
by previous work in semantic parsing, in this paper we propose a novel neural
architecture powered by a grammar model to explicitly capture the target syntax
as prior knowledge. Experiments find this an effective way to scale up to
generation of complex programs from natural language descriptions, achieving
state-of-the-art results that well outperform previous code generation and
semantic parsing approaches. | http://arxiv.org/pdf/1704.01696 | Pengcheng Yin, Graham Neubig | cs.CL, cs.PL, cs.SE | To appear in ACL 2017 | null | cs.CL | 20170406 | 20170406 | [] |
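For readability, here are the predicted and reference programs from the cell above, restored to runnable Python. The class and helper names (MinionCard, Minion, Battlecry, Heal, Damage, CharacterSelector, BothPlayer, UserPicker, CHARACTER_CLASS, CARD_RARITY) are assumed to come from the hearthbreaker engine targeted by this dataset; imports are omitted since the exact module paths are not shown in the source, and the second class is renamed only so both definitions can coexist.

```python
# Reference program: Battlecry restores 2 Health to all friendly characters.
class DarkscaleHealer(MinionCard):
    def __init__(self):
        super().__init__("Darkscale Healer", 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON,
                         battlecry=Battlecry(Heal(2), CharacterSelector()))

    def create_minion(self, player):
        return Minion(4, 5)


# Predicted program (marked incorrect above): deals 2 damage and targets both
# players' characters instead of healing friendly characters only.
class DarkscaleHealerPredicted(MinionCard):
    def __init__(self):
        super().__init__("Darkscale Healer", 5, CHARACTER_CLASS.ALL, CARD_RARITY.COMMON,
                         battlecry=Battlecry(Damage(2),
                                             CharacterSelector(players=BothPlayer(),
                                                               picker=UserPicker())))

    def create_minion(self, player):
        return Minion(4, 5)
```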
1704.01444 | 0 | arXiv:1704.01444v2 [cs.LG] 6 Apr 2017
# Learning to Generate Reviews and Discovering Sentiment
# Alec Radford, Rafal Jozefowicz, Ilya Sutskever
Abstract. We explore the properties of byte-level recurrent language models. When given sufficient amounts of capacity, training data, and compute time, the representations learned by these models include disentangled features corresponding to high-level concepts. Specifically, we find a single unit which performs sentiment analysis. These representations, learned in an unsupervised manner, achieve state of the art on the binary subset of the Stanford Sentiment Treebank. They are also very data efficient. When using only a handful of labeled examples, our approach matches the performance of strong baselines trained on full datasets. We also demonstrate the sentiment unit has a direct influence on the generative process of the model. Simply fixing its value to be positive or negative generates samples with the corresponding positive or negative sentiment.
it is now commonplace to reuse these representations on a broad suite of related tasks - one of the most successful examples of transfer learning to date (Oquab et al., 2014). | 1704.01444#0 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 1 | There is also a long history of unsupervised representation learning (Olshausen & Field, 1997). Much of the early research into modern deep learning was developed and validated via this approach (Hinton & Salakhutdinov, 2006) (Huang et al., 2007) (Vincent et al., 2008) (Coates et al., 2010) (Le, 2013). Unsupervised learning is promising due to its ability to scale beyond only the subsets and domains of data that can be cleaned and labeled given resource, privacy, or other constraints. This advantage is also its difficulty. While supervised approaches have clear objectives that can be directly optimized, unsupervised approaches rely on proxy tasks such as reconstruction, density estimation, or generation, which do not directly encourage useful representations for specific tasks. As a result, much work has gone into designing objectives, priors, and architectures meant to encourage the learning of useful representations. We refer readers to Goodfellow et al. (2016) for a detailed review.
| 1704.01444#1 |
1704.01444 | 2 | # 1. Introduction and Motivating Work
Representation learning (Bengio et al., 2013) plays a critical role in many modern machine learning systems. Representations map raw data to more useful forms and the choice of representation is an important component of any application. Broadly speaking, there are two areas of research emphasizing different details of how to learn useful representations.
The supervised training of high-capacity models on large labeled datasets is critical to the recent success of deep learning techniques for a wide range of applications such as image classification (Krizhevsky et al., 2012), speech recognition (Hinton et al., 2012), and machine translation (Wu et al., 2016). Analysis of the task specific representations learned by these models reveals many fascinating properties (Zhou et al., 2014). Image classifiers learn a broadly useful hierarchy of feature detectors re-representing raw pixels as edges, textures, and objects (Zeiler & Fergus, 2014). In the field of computer vision,
1OpenAI, San Francisco, California, USA. Correspondence to: Alec Radford <[email protected]>. | 1704.01444#2 |
1704.01444 | 3 | Despite these difficulties, there are notable applications of unsupervised learning. Pre-trained word vectors are a vital part of many modern NLP systems (Collobert et al., 2011). These representations, learned by modeling word co-occurrences, increase the data efficiency and generalization capability of NLP systems (Pennington et al., 2014) (Chen & Manning, 2014). Topic modelling can also discover factors within a corpus of text which align to human interpretable concepts such as art or education (Blei et al., 2003). | 1704.01444#3 |
1704.01444 | 4 | How to learn representations of phrases, sentences, and documents is an open area of research. Inspired by the success of word vectors, Kiros et al. (2015) propose skip-thought vectors, a method of training a sentence encoder by predicting the preceding and following sentence. The representation learned by this objective performs competitively on a broad suite of evaluated tasks. More advanced training techniques such as layer normalization (Ba et al., 2016) further improve results. However, skip-thought vectors are still outperformed by supervised models which directly optimize the desired performance metric on a specific dataset. This is the case for both text classification tasks, which measure whether a specific concept is well encoded in a representation, and more general semantic similarity tasks. This occurs even when the datasets are relatively small by modern standards, often consisting of only a few thousand labeled examples. | 1704.01444#4 |
1704.01444 | 5 | In contrast to learning a generic representation on one large dataset and then evaluating on other tasks/datasets, Dai & Le (2015) proposed using similar unsupervised objectives such as sequence autoencoding and language modeling to first pretrain a model on a dataset and then finetune it for a given task. This approach outperformed training the same model from random initialization and achieved state of the art on several text classification datasets. Combining language modelling with topic modelling and fitting a small supervised feature extractor on top has also achieved strong results on in-domain document level sentiment analysis (Dieng et al., 2016).
| 1704.01444#5 |
1704.01444 | 6 | # 2. Dataset
Much previous work on language modeling has evaluated on relatively small but competitive datasets such as Penn Treebank (Marcus et al., 1993) and Hutter Prize Wikipedia (Hutter, 2006). As discussed in Jozefowicz et al. (2016), performance on these datasets is primarily dominated by regularization. Since we are interested in high-quality sentiment representations, we chose the Amazon product review dataset introduced in McAuley et al. (2015) as a training corpus. In de-duplicated form, this dataset contains over 82 million product reviews from May 1996 to July 2014, amounting to over 38 billion training bytes. Due to the size of the dataset, we first split it into 1000 shards containing equal numbers of reviews and set aside 1 shard for validation and 1 shard for test. | 1704.01444#6 |
1704.01444 | 7 | Considering this, we hypothesize two effects may be combining to result in the weaker performance of purely unsupervised approaches. Skip-thought vectors were trained on a corpus of books. But some of the classification tasks they are evaluated on, such as sentiment analysis of reviews of consumer goods, do not have much overlap with the text of novels. We propose this distributional issue, combined with the limited capacity of current models, results in representational underfitting. Current generic distributed sentence representations may be very lossy - good at capturing the gist, but poor with the precise semantic or syntactic details which are critical for applications.
The experimental and evaluation protocols may be underestimating the quality of unsupervised representation learning for sentences and documents due to certain seemingly insignificant design decisions. Hill et al. (2016) also raises concern about current evaluation tasks in their recent work which provides a thorough survey of architectures and objectives for learning unsupervised sentence representations - including the above mentioned skip-thoughts.
[Figure 1 plot: bits per character vs. number of updates; LSTM and mLSTM training and validation curves] | 1704.01444#7 |
1704.01444 | 8 | In this work, we test whether this is the case. We focus in on the task of sentiment analysis and attempt to learn an unsupervised representation that accurately contains this concept. Mikolov et al. (2013) showed that word-level recurrent language modelling supports the learning of useful word vectors and we are interested in pushing this line of work. As an approach, we consider the popular research benchmark of byte (character) level language modelling due to its further simplicity and generality. We are also interested in evaluating this approach as it is not immediately clear whether such a low-level training objective supports the learning of high-level representations. We train on a very large corpus picked to have a similar distribution as our task of interest. We also benchmark on a wider range of tasks to quantify the sensitivity of the learned representation to various degrees of out-of-domain data and tasks.
Figure 1. The mLSTM converges faster and achieves a better result within our time budget compared to a standard LSTM with the same hidden state size.
| 1704.01444#8 |
1704.01444 | 9 | # 3. Model and Training Details
Many potential recurrent architectures and hyperparameter settings were considered in preliminary experiments on the dataset. Given the size of the dataset, searching the wide space of possible configurations is quite costly. To help alleviate this, we evaluated the generative performance of smaller candidate models after a single pass through the dataset. The model chosen for the large scale experiment is a single layer multiplicative LSTM (Krause et al., 2016) with 4096 units. We observed multiplicative LSTMs to converge faster than normal LSTMs for the hyperparam- | 1704.01444#9 |
1704.01444 | 10 | eter settings that were explored both in terms of data and wall-clock time. The model was trained for a single epoch on mini-batches of 128 subsequences of length 256 for a total of 1 million weight updates. States were initialized to zero at the beginning of each shard and persisted across updates to simulate full-backpropagation and allow for the forward propagation of information outside of a given subsequence. Adam (Kingma & Ba, 2014) was used to accelerate learning with an initial 5e-4 learning rate that was decayed linearly to zero over the course of training. Weight normalization (Salimans & Kingma, 2016) was applied to the LSTM parameters. Data-parallelism was used across 4 Pascal Titan X gpus to speed up training and increase effective memory size. Training took approximately one month. The model is compact, containing approximately as many parameters as there are reviews in the training dataset. It also has a high ratio of compute to total parameters compared to other large scale language models due to operating at a byte level. The selected model reaches 1.12 bits per byte.
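To make the architecture and schedule above concrete, here is a minimal multiplicative LSTM cell following the construction in Krause et al. (2016), together with the reported hyperparameters. PyTorch is an assumption of convenience (the text names no framework), and weight normalization, data parallelism, and the byte input encoding are omitted or simplified; this is a sketch, not the authors' implementation.

```python
import torch
import torch.nn as nn

class MultiplicativeLSTMCell(nn.Module):
    """mLSTM cell: an input-dependent intermediate state m_t replaces h_{t-1}
    in the gate computations of a standard LSTM (Krause et al., 2016)."""
    def __init__(self, input_size: int, hidden_size: int):
        super().__init__()
        self.wx_m = nn.Linear(input_size, hidden_size, bias=False)
        self.wh_m = nn.Linear(hidden_size, hidden_size, bias=False)
        self.wx_gates = nn.Linear(input_size, 4 * hidden_size)
        self.wm_gates = nn.Linear(hidden_size, 4 * hidden_size)

    def forward(self, x, state):
        h, c = state
        m = self.wx_m(x) * self.wh_m(h)  # multiplicative interaction
        i, f, o, g = (self.wx_gates(x) + self.wm_gates(m)).chunk(4, dim=-1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, (h, c)

# Reported configuration: 4096 hidden units over byte-level inputs, minibatches of
# 128 subsequences of length 256, 1M updates (one epoch), Adam at 5e-4 decayed
# linearly to zero. Next-byte cross-entropy in nats divided by ln(2) gives bits per
# byte (the selected model reaches 1.12).
cell = MultiplicativeLSTMCell(input_size=256, hidden_size=4096)  # 256-way byte input is an assumption
optimizer = torch.optim.Adam(cell.parameters(), lr=5e-4)
scheduler = torch.optim.lr_scheduler.LambdaLR(
    optimizer, lambda step: max(0.0, 1.0 - step / 1_000_000))
```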
| 1704.01444#10 |
1704.01444 | 11 | Table 1. Small dataset classification accuracies
| Method | MR | CR | SUBJ | MPQA |
|---|---|---|---|---|
| NBSVM [49] | 79.4 | 81.8 | 93.2 | 86.3 |
| SkipThought [23] | 77.3 | 81.8 | 92.6 | 87.9 |
| SkipThought (LN) | 79.5 | 83.1 | 93.7 | 89.3 |
| SDAE [12] | 74.6 | 78.0 | 90.8 | 86.9 |
| CNN [21] | 81.5 | 85.0 | 93.4 | 89.6 |
| AdaSent [56] | 83.1 | 86.3 | 95.5 | 93.3 |
| byte mLSTM | 86.9 | 91.4 | 94.6 | 88.5 |
# 4. Experimental Setup and Results
Our model processes text as a sequence of UTF-8 encoded bytes (Yergeau, 2003). For each byte, the model updates its hidden state and predicts a probability distribution over the next possible byte. The hidden state of the model serves as an online summary of the sequence which encodes all information the model has learned to preserve that is relevant to predicting the future bytes of the sequence. We are interested in understanding the properties of the learned encoding. The process of extracting a feature representation is outlined as follows (a short preprocessing sketch follows the list): | 1704.01444#11 |
1704.01444 | 12 | [Figure 2 plot: test accuracy on binary SST vs. number of labeled training examples]
Figure 2. Performance on the binary version of SST as a function of labeled training examples. The solid lines indicate the average of 100 runs while the shaded regions indicate the 10th and 90th percentiles. Previous results on the dataset are plotted as dashed lines with the numbers indicating the amount of examples required for logistic regression on the byte mLSTM representation to match their performance. RNTN (Socher et al., 2013), CNN (Kim, 2014), DMN (Kumar et al., 2015), LSTM (Wieting et al., 2015), NSE (Munkhdalai & Yu, 2016), CT-LSTM (Looks et al., 2017).
• Since newlines are used as review delimiters in the training dataset, all newline characters are replaced with spaces to avoid the model resetting state.
• Any leading whitespace is removed and replaced with a newline+space to simulate a start token. Any trailing whitespace is removed and replaced with a space to simulate an end token. The text is encoded as a UTF-8 byte sequence. | 1704.01444#12 |
1704.01444 | 13 | • Model states are initialized to zeros. The model processes the sequence and the final cell states of the mLSTM are used as a feature representation. Tanh is applied to bound values between -1 and 1.
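A minimal sketch of the preprocessing steps listed above. Only the text handling is concrete; `mlstm_final_cell_state` is a hypothetical callable standing in for a forward pass of the trained byte mLSTM.

```python
import numpy as np

def preprocess_to_bytes(text: str) -> bytes:
    # Newlines delimit reviews in the training data, so replace them with spaces
    # to avoid the model treating them as review boundaries.
    text = text.replace("\n", " ")
    # Simulate a start token with a leading newline+space and an end token with
    # a trailing space, then encode as UTF-8 bytes.
    return ("\n " + text.strip() + " ").encode("utf-8")

def extract_features(text: str, mlstm_final_cell_state) -> np.ndarray:
    """mlstm_final_cell_state: callable mapping a byte sequence (states initialized
    to zeros) to the final cell state of the trained mLSTM; assumed, not defined here."""
    cell_state = mlstm_final_cell_state(preprocess_to_bytes(text))
    return np.tanh(cell_state)  # bound feature values to [-1, 1]
```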
We follow the methodology established in Kiros et al. (2015) by training a logistic regression classifier on top of our model's representation on datasets for tasks including semantic relatedness, text classification, and paraphrase detection. For the details on these comparison experiments, we refer the reader to their work. One exception is that we use an L1 penalty for text classification results instead of L2 as we found this performed better in the very low data regime.
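And a sketch of the evaluation protocol from the paragraph above: a logistic regression probe on the frozen features, with an L1 penalty for the text classification tasks. The scikit-learn call is an assumed convenience, not the authors' tooling.

```python
from sklearn.linear_model import LogisticRegression

def fit_probe(train_features, train_labels, penalty="l1"):
    # liblinear supports L1-regularized logistic regression on dense feature matrices.
    clf = LogisticRegression(penalty=penalty, C=1.0, solver="liblinear")
    clf.fit(train_features, train_labels)
    return clf
```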
# 4.1. Review Sentiment Analysis
Table 1 shows the results of our model on 4 standard text classification datasets. The performance of our model is noticeably lopsided. On the MR (Pang & Lee, 2005) and
| 1704.01444#13 |
1704.01444 | 14 | CR (Hu & Liu, 2004) sentiment analysis datasets we improve the state of the art by a significant margin. The MR and CR datasets are sentences extracted from Rotten Tomatoes, a movie review website, and Amazon product reviews (which almost certainly overlaps with our training corpus). This suggests that our model has learned a rich representation of text from a similar domain. On the other two datasets, SUBJ's subjectivity/objectivity detection (Pang & Lee, 2004) and MPQA's opinion polarity (Wiebe et al., 2005), our model has no noticeable advantage over other unsupervised representation learning approaches and is still outperformed by a supervised approach. | 1704.01444#14 |
1704.01444 | 15 | To better quantify the learned representation, we also test on a wider set of sentiment analysis datasets with different properties. The Stanford Sentiment Treebank (SST) (Socher et al., 2013) was created specifically to evaluate more complex compositional models of language. It is derived from the same base dataset as MR but was relabeled via Amazon Mechanical Turk and includes dense labeling of the phrases of parse trees computed for all sentences. For the binary subtask, this amounts to 76961 total labels compared to the 6920 sentence level labels. As a demonstration of the capability of unsupervised representation learning to simplify data collection and remove preprocessing steps, our reported results ignore these dense labels and computed parse trees, using only the raw text and sentence level labels. | 1704.01444#15 |
1704.01444 | 16 | The representation learned by our model achieves 91.8%, significantly outperforming the state of the art of 90.2% by a 30 model ensemble (Looks et al., 2017). As visualized in Figure 2, our model is very data efficient. It matches the performance of baselines using as few as a dozen labeled examples and outperforms all previous results with only a few hundred labeled examples. This is under 10% of the total sentences in the dataset. Confusingly, despite a 16% relative error reduction on the binary subtask, it does not reach the state of the art of 53.6% on the fine-grained subtask, achieving 52.9%.
# 4.2. Sentiment Unit
Table 2. IMDB sentiment classification

| Method | Error |
|---|---|
| Full+Unlabeled+BoW (Maas et al., 2011) | 11.11% |
| NB-SVM Trigram (Mesnil et al., 2014) | 8.13% |
| Sentiment unit (ours) | 7.70% |
| SA-LSTM (Dai & Le, 2015) | 7.24% |
| byte mLSTM (ours) | 7.12% |
| TopicRNN (Dieng et al., 2016) | 6.24% |
| Virtual Adv (Miyato et al., 2016) | 5.91% |

| 1704.01444#16 |
1704.01444 | 18 | sentations our model learned and how they achieve the observed data efficiency. The benefit of an L1 penalty in the low data regime (see Figure 2) is a clue. L1 regularization is known to reduce sample complexity when there are many irrelevant features (Ng, 2004). This is likely to be the case for our model since it is trained as a language model and not as a supervised feature extractor. By inspecting the relative contributions of features on various datasets, we discovered a single unit within the mLSTM that directly corresponds to sentiment. In Figure 3 we show the histogram of the final activations of this unit after processing IMDB reviews (Maas et al., 2011) which shows a bimodal distribution with a clear separation between positive and negative reviews. In Figure 4 we visualize the activations of this unit on 6 randomly selected reviews from a set of 100 high contrast reviews which shows it acts as an online estimate of the local sentiment of the review. | 1704.01444#18 |
1704.01444 | 19 | Fitting a threshold to this single unit achieves a test accuracy of 92.30%, which outperforms a strong supervised result on the dataset, the 91.87% of NB-SVM trigram (Mesnil et al., 2014), but is still below the semi-supervised state of the art of 94.09% (Miyato et al., 2016). Using the full 4096 unit representation achieves 92.88%. This is an improvement of only 0.58% over the sentiment unit, suggesting that almost all information the model retains that is relevant to sentiment analysis is represented in the very compact form of a single scalar. Table 2 has a full list of results on the IMDB dataset. | 1704.01444#19 |
1704.01444 | 20 | # 4.3. Capacity Ceiling
Encouraged by these results, we were curious how well the model's representation scales to larger datasets. We try our approach on the binary version of the Yelp Dataset
Figure 5. Performance on the binary version of the Yelp reviews dataset as a function of labeled training examples. The model's performance plateaus after about ten labeled examples and only slowly improves with additional data.
| 1704.01444#20 |
1704.01444 | 21 |
Table 3. Microsoft Paraphrase Corpus
| Method | Acc | F1 |
|---|---|---|
| SkipThought (Kiros et al., 2015) | 73.0 | 82.0 |
| SDAE (Hill et al., 2016) | 76.4 | 83.4 |
| MTMetrics [31] | 77.4 | 84.1 |
| byte mLSTM | 75.0 | 82.8 |
Figure 4. Visualizing the value of the sentiment cell as it processes six randomly selected high contrast IMDB reviews. Red indicates negative sentiment while green indicates positive sentiment. Best seen in color.
Challenge in 2015 as introduced in Zhang et al. (2015). This dataset contains 598,000 examples, which is an order of magnitude larger than any other datasets we tested on. When visualizing performance as a function of number of training examples in Figure 5, we observe a "capacity ceiling" where the test accuracy of our approach only improves by a little over 1% across a four order of magnitude increase in training data. Using the full dataset, we achieve 95.22% test accuracy. This is better than a BoW TFIDF baseline at 93.66% but slightly worse than the 95.64% of a linear classifier on top of the 500,000 most frequent n-grams up to length 5. | 1704.01444#21 |
1704.01444 | 22 | The observed capacity ceiling is an interesting phenomenon and stumbling point for scaling our unsupervised representations. We think a variety of factors are contributing to cause this. Since our model is trained only on Amazon reviews, it does not appear to be sensitive to concepts specific to other domains. For instance, Yelp reviews are of
Table 4. SICK semantic relatedness subtask
| Method | r | ρ | MSE |
|---|---|---|---|
| SkipThought [23] | 0.858 | 0.792 | 0.269 |
| SkipThought (LN) | 0.858 | 0.788 | 0.270 |
| Tree-LSTM [47] | 0.868 | 0.808 | 0.253 |
| byte mLSTM | 0.792 | 0.725 | 0.390 |

| 1704.01444#22 |
1704.01444 | 23 | businesses, where details like hospitality, location, and atmosphere are important. But these ideas are not present in reviews of products. Additionally, there is a notable drop in the relative performance of our approach transitioning from sentence to document datasets. This is likely due to our model working on the byte level which leads to it focusing on the content of the last few sentences instead of the whole document. Finally, as the amount of labeled data increases, the performance of the simple linear model we train on top of our static representation will eventually saturate. Complex models explicitly trained for a task can continue to improve and eventually outperform our approach with enough labeled data.
With this context, the observed results make a lot of sense.
Generating Reviews and Discovering Sentiment | 1704.01444#23 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
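The "simple linear model we train on top of our static representation" mentioned in the chunk above corresponds to fitting a regularized linear classifier on frozen features. A minimal sketch with scikit-learn, using random stand-in features and labels in place of real language-model states and review sentiment:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Hypothetical precomputed document features (e.g. final hidden states of a
# byte-level language model) and binary sentiment labels.
rng = np.random.default_rng(0)
features = rng.normal(size=(1000, 64))      # stand-in for real 4096-d features
labels = (features[:, 0] > 0).astype(int)   # stand-in for real labels

X_tr, X_te, y_tr, y_te = train_test_split(features, labels, test_size=0.2, random_state=0)

# A simple L2-regularized linear classifier on top of the static representation.
clf = LogisticRegression(C=1.0, max_iter=1000)
clf.fit(X_tr, y_tr)
print("test accuracy:", clf.score(X_te, y_te))
```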
1704.01444 | 24 | Sentiment fixed to positive / Sentiment fixed to negative
Positive: Just what I was looking for. Nice fitted pants, exactly matched seam to color contrast with other pants I own. Highly recommended and also very happy!
Negative: The package received was blank and has no barcode. A waste of time and money.
Positive: This product does what it is supposed to. I always keep three of these in my kitchen just in case ever I need a replacement cord. Great little item.
Negative: Hard to put on the crib without some kind of embellishment. My guess is just like the screw kind of attachment I had.
Positive: Best hammock ever! Stays in place and holds it's shape. Comfy (I love the deep neon pictures on it), and looks so cute.
Negative: They didn't fit either. Straight high sticks at the end. On par with other buds I have. Lesson learned to avoid.
Positive: Dixie is getting her Doolittle newsletter we'll see another new one coming out next year. Great stuff. And, here's the contents - information that we hardly know about or forget.
Negative: great product but no seller. couldn't ascertain a cause. Broken product. I am a prolific consumer of this company all | 1704.01444#24 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 26 | Like the cover, Fits good. . However, an annoying rear piece like garbage should be out of this one. I bought this hoping it would help with a huge pull down my back & the black just doesn't stay. Scrap off everytime I use it.... Very disappointed.
Table 5. Random samples from the model generated when the value of sentiment hidden state is fixed to either -1 or 1 for all steps. The sentiment unit has a strong influence on the model's generative process.
On a small sentence level dataset of a known domain (the movie reviews of Stanford Sentiment Treebank) our model sets a new state of the art. But on a large, document level dataset of a different domain (the Yelp reviews) it is only competitive with standard baselines.
# 4.4. Other Tasks | 1704.01444#26 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 27 | # 4.4. Other Tasks
Besides classification, we also evaluate on two other standard tasks: semantic relatedness and paraphrase detection. While our model performs competitively on Microsoft Research Paraphrase Corpus (Dolan et al., 2004) in Table 3, it performs poorly on the SICK semantic relatedness task (Marelli et al., 2014) in Table 4. It is likely that the form and content of the semantic relatedness task, which is built on top of descriptions of images and videos and contains sentences such as "A sea turtle is hunting for fish" is effectively out-of-domain for our model which has only been trained on the text of product reviews.
# 4.5. Generative Analysis
Although the focus of our analysis has been on the properties of our model's representation, it is trained as a generative model and we are also interested in its generative capabilities. Hu et al. (2017) and Dong et al. (2017) both designed conditional generative models to disentangle the content of text from various attributes like sentiment or | 1704.01444#27 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
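For semantic relatedness and paraphrase tasks like the ones discussed above, a common transfer recipe (following the skip-thought evaluation protocol, not necessarily the exact setup used here) is to featurize a sentence pair (u, v) with the element-wise absolute difference and product of the two embeddings and fit a simple regressor:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Hypothetical sentence embeddings for pairs (u_i, v_i) and gold relatedness scores.
rng = np.random.default_rng(1)
u = rng.normal(size=(500, 64))
v = rng.normal(size=(500, 64))
scores = rng.uniform(1, 5, size=500)

# Pair features: element-wise absolute difference and element-wise product.
pair_features = np.hstack([np.abs(u - v), u * v])

reg = Ridge(alpha=1.0).fit(pair_features, scores)
pred = reg.predict(pair_features)
print("train Pearson r:", np.corrcoef(pred, scores)[0, 1])
```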
1704.01444 | 28 | tense. We were curious whether a similar result could be achieved using the sentiment unit. In Table 5 we show that by simply setting the sentiment unit to be positive or negative, the model generates corresponding positive or negative reviews. While all sampled negative reviews contain sentences with negative sentiment, they sometimes contain sentences with positive sentiment as well. This might be reflective of the bias of the training corpus which contains over 5x as many five star reviews as one star reviews. Nevertheless, it is interesting to see that such a simple manipulation of the model's representation has a noticeable effect on its behavior. The samples are also high quality for a byte level language model and often include valid sentences.
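A minimal sketch of the manipulation described above: sample bytes from a recurrent language model while clamping one hidden unit to +1 or -1 at every step. The tiny randomly initialized PyTorch model and the unit index are stand-ins; the paper's model is a large multiplicative LSTM whose sentiment unit is found by inspecting a trained network.

```python
import torch
import torch.nn.functional as F

torch.manual_seed(0)
embed = torch.nn.Embedding(256, 64)      # byte embedding
cell = torch.nn.LSTMCell(64, 128)        # stand-in for the paper's large mLSTM
head = torch.nn.Linear(128, 256)         # next-byte logits
SENTIMENT_IDX = 7                        # hypothetical index of the sentiment unit

@torch.no_grad()
def sample(n_bytes=60, sentiment=1.0):
    h, c = torch.zeros(1, 128), torch.zeros(1, 128)
    byte = torch.tensor([ord(" ")])
    out = []
    for _ in range(n_bytes):
        h, c = cell(embed(byte), (h, c))
        h[0, SENTIMENT_IDX] = sentiment          # clamp the chosen unit every step
        probs = F.softmax(head(h), dim=-1)
        byte = torch.multinomial(probs, 1).squeeze(1)
        out.append(int(byte))
    return bytes(out)

print(sample(sentiment=+1.0))   # untrained toy model, so the output is random bytes
```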
# 5. Discussion and Future Work | 1704.01444#28 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 29 | # 5. Discussion and Future Work
It is an open question why our model recovers the concept of sentiment in such a precise, disentangled, interpretable, and manipulable way. It is possible that sentiment as a conditioning feature has strong predictive capability for language modelling. This is likely since sentiment is such an important component of a review. Previous work analysing LSTM language models showed the existence of interpretable units that indicate position within a line or presence inside a quotation (Karpathy et al., 2015). In many ways, the sentiment unit in this model is just a scaled-up example of the same phenomenon. The update equation of an LSTM could play a role. The element-wise operation of its gates may encourage axis-aligned representations. Models such as word2vec have also been observed to have small subsets of dimensions strongly associated with specific tasks (Li et al., 2016). | 1704.01444#29 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
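The "element-wise operation of its gates" mentioned in the chunk above refers to the fact that each hidden unit is updated independently through per-unit gate arithmetic. A standard LSTM step in NumPy (the paper actually uses a multiplicative LSTM; this vanilla variant is only meant to make the element-wise structure explicit):

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One standard LSTM step; all gate arithmetic is element-wise per hidden unit."""
    z = W @ x + U @ h + b                  # all four gates in one affine map
    i, f, o, g = np.split(z, 4)
    i, f, o = 1 / (1 + np.exp(-i)), 1 / (1 + np.exp(-f)), 1 / (1 + np.exp(-o))
    g = np.tanh(g)
    c_new = f * c + i * g                  # element-wise: each unit updated independently
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy sizes: 8 inputs, 4 hidden units.
rng = np.random.default_rng(0)
x, h, c = rng.normal(size=8), np.zeros(4), np.zeros(4)
W, U, b = rng.normal(size=(16, 8)), rng.normal(size=(16, 4)), np.zeros(16)
h, c = lstm_step(x, h, c, W, U, b)
print(h)
```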
1704.01444 | 30 | Our work highlights the sensitivity of learned representations to the data distribution they are trained on. The results make clear that it is unrealistic to expect a model trained on a corpus of books, where the two most common genres are Romance and Fantasy, to learn an encoding which preserves the exact sentiment of a review. Likewise, it is unrealistic to expect a model trained on Amazon product reviews to represent the precise semantic content of a caption of an image or a video.
There are several promising directions for future work highlighted by our results. The observed performance plateau, even on relatively similar domains, suggests improving the representation model both in terms of architecture and size. Since our model operates at the byte-level, hierarchical/multi-timescale extensions could improve the quality of representations for longer documents. The sensitivity of learned representations to their training domain could be addressed by training on a wider mix of datasets with better coverage of target tasks. Finally, our work encourages further research into language modelling as it demonstrates that the standard language modelling objective with no modifications is sufficient to learn high-quality representations.
of Machine Learning Research, 12(Aug):2493–2537, 2011. | 1704.01444#30 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 31 | of Machine Learning Research, 12(Aug):2493–2537, 2011.
Dai, Andrew M and Le, Quoc V. Semi-supervised sequence learning. In Advances in Neural Information Processing Systems, pp. 3079–3087, 2015.
Dieng, Adji B, Wang, Chong, Gao, Jianfeng, and Paisley, John. Topicrnn: A recurrent neural network with long-range semantic dependency. arXiv preprint arXiv:1611.01702, 2016.
Dolan, Bill, Quirk, Chris, and Brockett, Chris. Unsupervised construction of large paraphrase corpora: Exploiting massively parallel news sources. In Proceedings of the 20th International Conference on Computational Linguistics, pp. 350. Association for Computational Linguistics, 2004.
Dong, Li, Huang, Shaohan, Wei, Furu, Lapata, Mirella, Zhou, Ming, and Ke, Xu. Learning to generate product reviews from attributes. In Proceedings of the 15th Conference of the European Chapter of the Association for Computational Linguistics, pp. 623–632. Association for Computational Linguistics, 2017.
Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. 2016. | 1704.01444#31 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 32 | Goodfellow, Ian, Bengio, Yoshua, and Courville, Aaron. Deep learning. 2016.
Hill, Felix, Cho, Kyunghyun, and Korhonen, Anna. Learning distributed representations of sentences from unlabelled data. arXiv preprint arXiv:1602.03483, 2016.
# References
Ba, Jimmy Lei, Kiros, Jamie Ryan, and Hinton, Geoffrey E. Layer normalization. arXiv preprint arXiv:1607.06450, 2016.
Bengio, Yoshua, Courville, Aaron, and Vincent, Pascal. Representation learning: A review and new perspectives. IEEE Transactions on Pattern Analysis and Machine Intelligence, 35(8):1798–1828, 2013.
Blei, David M, Ng, Andrew Y, and Jordan, Michael I. Latent dirichlet allocation. Journal of Machine Learning Research, 3(Jan):993–1022, 2003.
Chen, Danqi and Manning, Christopher D. A fast and accurate dependency parser using neural networks. In EMNLP, pp. 740–750, 2014.
Coates, Adam, Lee, Honglak, and Ng, Andrew Y. An analysis of single-layer networks in unsupervised feature learning. Ann Arbor, 1001(48109):2, 2010. | 1704.01444#32 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 33 | Collobert, Ronan, Weston, Jason, Bottou, Léon, Karlen, Michael, Kavukcuoglu, Koray, and Kuksa, Pavel. Natural language processing (almost) from scratch. Journal
Hinton, Geoffrey, Deng, Li, Yu, Dong, Dahl, George E, Mohamed, Abdel-rahman, Jaitly, Navdeep, Senior, Andrew, Vanhoucke, Vincent, Nguyen, Patrick, Sainath, Tara N, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Processing Magazine, 29(6):82–97, 2012.
Hinton, Geoffrey E and Salakhutdinov, Ruslan R. Reducing the dimensionality of data with neural networks. Science, 313(5786):504–507, 2006.
Hu, Minqing and Liu, Bing. Mining and summarizing customer reviews. In Proceedings of the Tenth ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 168–177. ACM, 2004.
Hu, Zhiting, Yang, Zichao, Liang, Xiaodan, Salakhutdinov, Ruslan, and Xing, Eric P. Controllable text generation. arXiv preprint arXiv:1703.00955, 2017. | 1704.01444#33 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 34 | Huang, Fu Jie, Boureau, Y-Lan, LeCun, Yann, et al. Unsupervised learning of invariant feature hierarchies with applications to object recognition. In Computer Vision and Pattern Recognition, 2007. CVPR'07. IEEE Conference on, pp. 1–8. IEEE, 2007.
Hutter, Marcus. The human knowledge compression contest. 2006. URL http://prize.hutter1.net, 2006.
Jozefowicz, Rafal, Vinyals, Oriol, Schuster, Mike, Shazeer, Noam, and Wu, Yonghui. Exploring the limits of language modeling. arXiv preprint arXiv:1602.02410, 2016.
Madnani, Nitin, Tetreault, Joel, and Chodorow, Martin. Re-examining machine translation metrics for paraphrase identification. In Proceedings of the 2012 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, pp. 182–190. Association for Computational Linguistics, 2012.
Karpathy, Andrej, Johnson, Justin, and Fei-Fei, Li. Visualizing and understanding recurrent networks. arXiv preprint arXiv:1506.02078, 2015. | 1704.01444#34 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 35 | Kim, Yoon. Convolutional neural networks for sentence classification. arXiv preprint arXiv:1408.5882, 2014.
Kingma, Diederik and Ba, Jimmy. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980, 2014.
Kiros, Ryan, Zhu, Yukun, Salakhutdinov, Ruslan R, Zemel, Richard, Urtasun, Raquel, Torralba, Antonio, and Fidler, Sanja. Skip-thought vectors. In Advances in Neural Information Processing Systems, pp. 3294–3302, 2015.
Krause, Ben, Lu, Liang, Murray, Iain, and Renals, Steve. Multiplicative LSTM for sequence modelling. arXiv preprint arXiv:1609.07959, 2016.
Marcus, Mitchell P, Marcinkiewicz, Mary Ann, and Santorini, Beatrice. Building a large annotated corpus of English: The Penn Treebank. Computational Linguistics, 19(2):313–330, 1993. | 1704.01444#35 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 36 | Marelli, Marco, Bentivogli, Luisa, Baroni, Marco, Bernardi, Raffaella, Menini, Stefano, and Zamparelli, Roberto. Semeval-2014 task 1: Evaluation of compositional distributional semantic models on full sentences through semantic relatedness and textual entailment. SemEval-2014, 2014.
McAuley, Julian, Pandey, Rahul, and Leskovec, Jure. Inferring networks of substitutable and complementary products. In Proceedings of the 21th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining, pp. 785–794. ACM, 2015.
Krizhevsky, Alex, Sutskever, Ilya, and Hinton, Geoffrey E. Imagenet classification with deep convolutional neural networks. In Advances in Neural Information Processing Systems, pp. 1097–1105, 2012.
Mesnil, Grégoire, Mikolov, Tomas, Ranzato, Marc'Aurelio, and Bengio, Yoshua. Ensemble of generative and discriminative techniques for sentiment analysis of movie reviews. arXiv preprint arXiv:1412.5335, 2014. | 1704.01444#36 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 37 | Kumar, Ankit, Irsoy, Ozan, Su, Jonathan, Bradbury, James, English, Robert, Pierce, Brian, Ondruska, Peter, Gulrajani, Ishaan, and Socher, Richard. Ask me anything: Dynamic memory networks for natural language processing. CoRR, abs/1506.07285, 2015.
Mikolov, Tomas, Yih, Wen-tau, and Zweig, Geoffrey. Linguistic regularities in continuous space word representations. 2013.
Le, Quoc V. Building high-level features using large scale unsupervised learning. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pp. 8595–8598. IEEE, 2013.
Miyato, Takeru, Dai, Andrew M, and Goodfellow, Ian. Adversarial training methods for semi-supervised text classification. arXiv preprint arXiv:1605.07725, 2016.
Munkhdalai, Tsendsuren and Yu, Hong. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016. | 1704.01444#37 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 38 | Munkhdalai, Tsendsuren and Yu, Hong. Neural semantic encoders. arXiv preprint arXiv:1607.04315, 2016.
Li, Jiwei, Monroe, Will, and Jurafsky, Dan. Understanding neural networks through representation erasure. arXiv preprint arXiv:1612.08220, 2016.
Looks, Moshe, Herreshoff, Marcello, Hutchins, DeLesley, and Norvig, Peter. Deep learning with dynamic computation graphs. arXiv preprint arXiv:1702.02181, 2017.
Maas, Andrew L, Daly, Raymond E, Pham, Peter T, Huang, Dan, Ng, Andrew Y, and Potts, Christopher. Learning word vectors for sentiment analysis. In Proceedings of the 49th Annual Meeting of the Association for Computational Linguistics: Human Language Technologies - Volume 1, pp. 142–150. Association for Computational Linguistics, 2011.
Ng, Andrew Y. Feature selection, L1 vs. L2 regularization, and rotational invariance. In Proceedings of the Twenty-First International Conference on Machine Learning, pp. 78. ACM, 2004. | 1704.01444#38 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
Olshausen, Bruno A and Field, David J. Sparse coding with an overcomplete basis set: A strategy employed by V1? Vision Research, 37(23):3311–3325, 1997.
Oquab, Maxime, Bottou, Leon, Laptev, Ivan, and Sivic, Josef. Learning and transferring mid-level image representations using convolutional neural networks. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition, pp. 1717–1724, 2014.
Pang, Bo and Lee, Lillian. A sentimental education: Sentiment analysis using subjectivity summarization based on minimum cuts. In Proceedings of the 42nd Annual Meeting on Association for Computational Linguistics, pp. 271. Association for Computational Linguistics, 2004.
the gap between human and machine translation. arXiv preprint arXiv:1609.08144, 2016.
Yergeau, Francois. Utf-8, a transformation format of iso 10646. 2003.
Pang, Bo and Lee, Lillian. Seeing stars: Exploiting class relationships for sentiment categorization with respect to rating scales. In Proceedings of the 43rd Annual Meeting on Association for Computational Linguistics, pp. 115–124. Association for Computational Linguistics, 2005. | 1704.01444#39 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 40 | Jeffrey, Socher, Richard, and Manning, Christopher D. Glove: Global vectors for word representation. In EMNLP, volume 14, pp. 1532–1543, 2014.
Salimans, Tim and Kingma, Diederik P. Weight normalization: A simple reparameterization to accelerate training of deep neural networks. In Advances in Neural Information Processing Systems, pp. 901–901, 2016.
Socher, Richard, Perelygin, Alex, Wu, Jean Y, Chuang, Jason, Manning, Christopher D, Ng, Andrew Y, Potts, Christopher, et al. Recursive deep models for semantic compositionality over a sentiment treebank. Citeseer, 2013.
Zeiler, Matthew D and Fergus, Rob. Visualizing and understanding convolutional networks. In European Conference on Computer Vision, pp. 818–833. Springer, 2014.
Zhang, Xiang, Zhao, Junbo, and LeCun, Yann. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pp. 649–657, 2015. | 1704.01444#40 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 41 | Zhao, Han, Lu, Zhengdong, and Poupart, Pascal. Self-adaptive hierarchical sentence model. arXiv preprint arXiv:1504.05070, 2015.
Zhou, Bolei, Khosla, Aditya, Lapedriza, Agata, Oliva, Aude, and Torralba, Antonio. Object detectors emerge in deep scene cnns. arXiv preprint arXiv:1412.6856, 2014.
Tai, Kai Sheng, Socher, Richard, and Manning, Christopher D. Improved semantic representations from tree-structured long short-term memory networks. arXiv preprint arXiv:1503.00075, 2015.
Vincent, Pascal, Larochelle, Hugo, Bengio, Yoshua, and Manzagol, Pierre-Antoine. Extracting and composing robust features with denoising autoencoders. In Proceedings of the 25th International Conference on Machine Learning, pp. 1096–1103. ACM, 2008. | 1704.01444#41 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.01444 | 42 | Wang, Sida and Manning, Christopher D. Baselines and bigrams: Simple, good sentiment and topic classification. In Proceedings of the 50th Annual Meeting of the Association for Computational Linguistics: Short Papers - Volume 2, pp. 90–94. Association for Computational Linguistics, 2012.
Wiebe, Janyce, Wilson, Theresa, and Cardie, Claire. Annotating expressions of opinions and emotions in language. Language Resources and Evaluation, 39(2):165–210, 2005.
Wieting, John, Bansal, Mohit, Gimpel, Kevin, and Livescu, Karen. Towards universal paraphrastic sentence embeddings. arXiv preprint arXiv:1511.08198, 2015.
Wu, Yonghui, Schuster, Mike, Chen, Zhifeng, Le, Quoc V, Norouzi, Mohammad, Macherey, Wolfgang, Krikun, Maxim, Cao, Yuan, Gao, Qin, Macherey, Klaus, et al. Google's neural machine translation system: Bridging | 1704.01444#42 | Learning to Generate Reviews and Discovering Sentiment | We explore the properties of byte-level recurrent language models. When given
sufficient amounts of capacity, training data, and compute time, the
representations learned by these models include disentangled features
corresponding to high-level concepts. Specifically, we find a single unit which
performs sentiment analysis. These representations, learned in an unsupervised
manner, achieve state of the art on the binary subset of the Stanford Sentiment
Treebank. They are also very data efficient. When using only a handful of
labeled examples, our approach matches the performance of strong baselines
trained on full datasets. We also demonstrate the sentiment unit has a direct
influence on the generative process of the model. Simply fixing its value to be
positive or negative generates samples with the corresponding positive or
negative sentiment. | http://arxiv.org/pdf/1704.01444 | Alec Radford, Rafal Jozefowicz, Ilya Sutskever | cs.LG, cs.CL, cs.NE | null | null | cs.LG | 20170405 | 20170406 | [
{
"id": "1612.08220"
},
{
"id": "1702.02181"
},
{
"id": "1602.02410"
},
{
"id": "1506.02078"
},
{
"id": "1609.07959"
},
{
"id": "1703.00955"
},
{
"id": "1609.08144"
},
{
"id": "1611.01702"
},
{
"id": "1504.05070"
},
{
"id": "1607.04315"
},
{
"id": "1511.08198"
},
{
"id": "1607.06450"
},
{
"id": "1605.07725"
},
{
"id": "1503.00075"
},
{
"id": "1602.03483"
}
] |
1704.00648 | 0 | arXiv:1704.00648v2 [cs.LG] 8 Jun 2017
# Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations
Eirikur Agustsson ETH Zurich [email protected]
Fabian Mentzer ETH Zurich [email protected]
# Michael Tschannen ETH Zurich [email protected]
Lukas Cavigelli ETH Zurich [email protected]
Radu Timofte ETH Zurich [email protected]
# Luc Van Gool KU Leuven ETH Zurich [email protected]
Luca Benini ETH Zurich [email protected]
# Abstract | 1704.00648#0 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 0 | arXiv:1704.00805v4 [math.OC] 21 Aug 2018
# On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning
Bolin Gao and Lacra Pavel
Abstract: In this paper, we utilize results from convex analysis and monotone operator theory to derive additional properties of the softmax function that have not yet been covered in the existing literature. In particular, we show that the softmax function is the monotone gradient map of the log-sum-exp function. By exploiting this connection, we show that the inverse temperature parameter λ determines the Lipschitz and co-coercivity properties of the softmax function. We then demonstrate the usefulness of these properties through an application in game-theoretic reinforcement learning.
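The abstract above states that the softmax function is the gradient map of the log-sum-exp function, with λ acting as an inverse temperature. A small numerical check of that relationship under one common scaling convention (λ is written `lam`; the finite-difference step is arbitrary):

```python
import numpy as np

def lse(x, lam):
    # Log-sum-exp with inverse temperature lam: (1/lam) * log(sum_i exp(lam * x_i)),
    # computed in a numerically stable way.
    z = lam * x
    m = z.max()
    return (m + np.log(np.exp(z - m).sum())) / lam

def softmax(x, lam):
    z = lam * x
    e = np.exp(z - z.max())
    return e / e.sum()

# Finite-difference check that softmax(x, lam) equals the gradient of lse(x, lam).
rng = np.random.default_rng(0)
x, lam, eps = rng.normal(size=5), 2.0, 1e-6
grad_fd = np.zeros(5)
for i in range(5):
    e_i = np.zeros(5)
    e_i[i] = eps
    grad_fd[i] = (lse(x + e_i, lam) - lse(x - e_i, lam)) / (2 * eps)
print(np.allclose(grad_fd, softmax(x, lam), atol=1e-5))
```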
# I. INTRODUCTION | 1704.00805#0 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 1 | Luca Benini ETH Zurich [email protected]
# Abstract
We present a new approach to learn compressible representations in deep architectures with an end-to-end training strategy. Our method is based on a soft (continuous) relaxation of quantization and entropy, which we anneal to their discrete counterparts throughout training. We showcase this method for two challenging applications: Image compression and neural network compression. While these tasks have typically been approached with different methods, our soft-to-hard quantization approach gives results competitive with the state-of-the-art for both.
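A toy scalar illustration of the soft-to-hard idea described in the abstract above: assignments to quantization centers are computed with a softmax over negative scaled distances, and as the annealing parameter grows the soft assignment approaches the hard nearest-center quantizer. The centers, values and schedule below are made up; the paper applies this to learned vector quantization together with an entropy term.

```python
import numpy as np

def soft_assignments(z, centers, sigma):
    """Soft (softmax) assignment of each scalar in z to quantization centers.
    As sigma grows, the soft assignment approaches the hard nearest-center one."""
    d = -sigma * (z[:, None] - centers[None, :]) ** 2   # negative scaled squared distances
    d -= d.max(axis=1, keepdims=True)
    w = np.exp(d)
    return w / w.sum(axis=1, keepdims=True)

centers = np.array([-1.0, 0.0, 1.0])
z = np.array([-0.9, 0.1, 0.4, 1.2])
for sigma in (1.0, 10.0, 100.0):                        # illustrative annealing schedule
    w = soft_assignments(z, centers, sigma)
    soft_q = w @ centers                                # differentiable soft quantization
    hard_q = centers[np.argmin(np.abs(z[:, None] - centers[None, :]), axis=1)]
    print(sigma, np.round(soft_q, 3), hard_q)
```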
# Introduction | 1704.00648#1 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 1 | The softmax function is one of the most well-known functions in science and engineering and has enjoyed widespread usage in fields such as game theory [1], [2], [3], reinforcement learning [4] and machine learning [5], [6]. From a game theory and reinforcement learning perspective, the softmax function maps the raw payoff or the score (or Q-value) associated with a payoff to a mixed strategy [1], [2], [4], whereas from the perspective of multi-class logistic regression, the softmax function maps a vector of logits (or feature variables) to a posterior probability distribution [5], [6]. The broader engineering applications involving the softmax function are numerous; interesting examples can be found in the fields of VLSI and neuromorphic computing, see [35], [36], [37], [39]. The term "softmax" is a portmanteau of "soft" and "argmax" [5]. The function first appeared in the work of Luce [12], although its coinage is mostly credited to Bridle [13]. Depending on the context in which the softmax function appears, it also goes by the name of Boltzmann distribution [1], | 1704.00805#1 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 2 | # Introduction
In recent years, deep neural networks (DNNs) have led to many breakthrough results in machine learning and computer vision [20, 28, 9], and are now widely deployed in industry. Modern DNN models often have millions or tens of millions of parameters, leading to highly redundant structures, both in the intermediate feature representations they generate and in the model itself. Although overparametrization of DNN models can have a favorable effect on training, in practice it is often desirable to compress DNN models for inference, e.g., when deploying them on mobile or embedded devices with limited memory. The ability to learn compressible feature representations, on the other hand, has a large potential for the development of (data-adaptive) compression algorithms for various data types such as images, audio, video, and text, for all of which various DNN architectures are now available. | 1704.00648#2 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 2 | mostly credited to Bridle [13]. Depending on the context in which the softmax function appears, it also goes by the name of Boltzmann distribution [1], [4], [34], Gibbs map [22], [46], logit map, logit choice rule, logit response function [1], [2], [3], [19], [14], [23], [57] or (smooth) perturbed best response function [44], [56]. The reader should take care in distinguishing the softmax function used in this paper from the log-sum-exp function, which is often also referred to as the "softmax" (since the log-sum-exp is a soft approximation of the vector-max function [7], [24]). There are many factors contributing to the widespread usage of the softmax function. In the context of reinforcement learning, the softmax function ensures a trade-off between exploitation and exploration, in that every strategy in an agent's possession has a chance of being explored. Unlike some other choice mechanisms such as ε-greedy [4], the usage of | 1704.00805#2 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
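The exploration/exploitation contrast drawn above between softmax and ε-greedy choice can be seen in a few lines; both selection rules below are generic textbook versions, not code from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax_choice(scores, lam):
    """Softmax (Boltzmann/logit) selection: every strategy keeps a nonzero
    probability, with lam trading off exploration against exploitation."""
    z = lam * scores - (lam * scores).max()
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(scores), p=p)

def epsilon_greedy_choice(scores, eps):
    """Epsilon-greedy selection: explore uniformly with probability eps, else exploit."""
    if rng.random() < eps:
        return int(rng.integers(len(scores)))
    return int(np.argmax(scores))

scores = np.array([1.0, 0.5, 0.1])
print(softmax_choice(scores, lam=2.0), epsilon_greedy_choice(scores, eps=0.1))
```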
1704.00648 | 3 | DNN model compression and lossy image compression using DNNs have both independently attracted a lot of attention lately. In order to compress a set of continuous model parameters or features, we need to approximate each parameter or feature by one representative from a set of quantization levels (or vectors, in the multi-dimensional case), each associated with a symbol, and then store the assignments (symbols) of the parameters or features, as well as the quantization levels. Representing each parameter of a DNN model or each feature in a feature representation by the corresponding quantization level will come at the cost of a distortion D, i.e., a loss in performance (e.g., in classification accuracy for a classification DNN with quantized model parameters, or in reconstruction error in the context of autoencoders with quantized intermediate feature representations). The rate R, i.e., the entropy of the symbol stream, determines the cost of encoding the model or features in a bitstream. | 1704.00648#3 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 3 | softmax selection rule1 is favorably supported by experimental literature in game theory and reinforcement learning as a plausible model for modeling real-life decision-making. For instance, in [20], the authors noted that the behavior of monkeys during reinforcement learning experiments is consistent with the softmax selection rule. Furthermore, the input-output behavior of the softmax function has been compared to lateral inhibition in biological neural networks [5]. For additional discussions on the connections between the softmax selection rule and the neurophysiology of decision-making, see [30], [31], [32], [33]. From the perspective of game theory, the softmax function characterizes the so-called "logit equilibrium", which accounts for incomplete information and random perturbation of the payoff during gameplay and has been noted for having better versatility in describing the outcomes of gameplay as compared to the Nash equilibrium [3], [14].
Fig. 1: High-level representation of a game-theoretic multi-agent reinforcement learning scheme with the softmax selection rule. In this learning scenario, the players each choose some strategy, play the game and receive real-valued payoffs. The players then use some learning rule to independently convert the payoffs into scores. Finally, each player uses the softmax to select the next strategy. | 1704.00805#3 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
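To make the selection step of the scheme in Fig. 1 concrete, here is a minimal NumPy sketch of softmax strategy selection driven by learned scores; the payoff vector, noise level, learning rate and inverse temperature are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

def softmax(scores, lam=1.0):
    # Softmax selection rule: map a score vector to a mixed strategy on the simplex.
    e = np.exp(lam * (scores - scores.max()))   # shift by max for numerical stability
    return e / e.sum()

mean_payoff = np.array([1.0, 1.5, 0.5])   # assumed mean payoffs of 3 pure strategies
scores = np.zeros(3)                      # scores maintained by the learning rule
eta, lam = 0.1, 2.0                       # assumed learning rate and inverse temperature

for t in range(2000):
    x = softmax(scores, lam)              # mixed strategy from current scores
    a = rng.choice(3, p=x)                # play a pure strategy sampled from x
    payoff = mean_payoff[a] + 0.1 * rng.standard_normal()
    scores[a] += eta * payoff             # simple illustrative score update

print(np.round(softmax(scores, lam), 3))  # strategy with the highest payoff ends up most likely
```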
1704.00648 | 4 | To learn a compressible DNN model or feature representation we need to minimize D + βR, where β > 0 controls the rate-distortion trade-off. Including the entropy into the learning cost function can be seen as adding a regularizer that promotes a compressible representation of the network or feature representation. However, two major challenges arise when minimizing D + βR for DNNs: i) coping with the non-differentiability (due to quantization operations) of the cost function D + βR, and ii) obtaining an accurate and differentiable estimate of the entropy (i.e., R). To tackle i), various methods have been proposed. Among the most popular ones are stochastic approximations [39, 19, 6, 32, 4] and rounding with a smooth derivative approximation [15, 30]. To address ii) a common approach is to assume the symbol stream to be i.i.d. and to model the marginal symbol distribution with a parametric model, such as a Gaussian mixture model [30, 34], a piecewise linear model [4], or a Bernoulli distribution [33] (in the case of binary symbols). | 1704.00648#4 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 4 | Despite the intuitions that researchers have acquired with respect to the usage of the softmax function, it is apparent that the understanding of its mathematical properties is still lacking. For instance, in the analysis of stateless multi-agent reinforcement learning schemes (Figure 1), when the action selection rule is taken as the softmax function, is of interest which, if any, properties of softmax can allow us to conclude convergence of the learning algorithm towards a solution of the game (e.g., a Nash or logit equilibrium). Although the desired properties that can be used to conclude such convergence are fairly mundane, virtually no reference to these properties can be found within the existing body of literature. With regard to applications in the context of
B. Gao and L. Pavel are with the Department of Electrical and Computer Engineering, University of Toronto, Toronto, ON, M5S 3G4. [email protected], [email protected]
1In this paper, we refer to the softmax function interchangeably as the softmax operator, softmax map, softmax choice, softmax selection rule, or simply, the softmax.
1 | 1704.00805#4 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 5 | In this paper, we propose a unified end-to-end learning framework for learning compressible representations, jointly optimizing the model parameters, the quantization levels, and the entropy of the resulting symbol stream to compress either a subset of feature representations in the network or the model itself (see inset figure). We address both challenges i) and ii) above with methods that are novel in the context of DNN model and feature compression. Our main contributions are:
[Inset figure: for DNN model compression, the network x → F1 → · · · → FK → x(K) is compressed via z = [w1, w2, . . . , wK]; for data compression with the same architecture, z = x(b) is the output of an intermediate layer Fb; in both cases z is the vector to be compressed.]
We provide the ï¬rst uniï¬ed view on end-to-end learned compression of feature representations and DNN models. These two problems have been studied largely independently in the literature so far. | 1704.00648#5 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 5 | 1
reinforcement and machine learning, the adjustment of the temperature constant of the softmax function is still performed on a rule-of-thumb basis. It has also been brieï¬y speculated in [42] that proper adjustment of the temperature constant can be used for game-theoretic reinforcement learning algorithms to achieve higher expected payoff. Therefore, an adaptive mechanism for scheduling the temperature constant would be desirable for many applications. Clearly, these questions can only be afï¬rmatively answered by uncovering new properties of the softmax function. | 1704.00805#5 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 6 | Our method is simple and intuitively appealing, relying on soft assignments of a given scalar or vector to be quantized to quantization levels. A parameter controls the âhardnessâ of the assignments and allows to gradually transition from soft to hard assignments during training. In contrast to rounding-based or stochastic quantization schemes, our coding scheme is directly differentiable, thus trainable end-to-end.
Our method does not force the network to adapt to speciï¬c (given) quantization outputs (e.g., integers) but learns the quantization levels jointly with the weights, enabling application to a wider set of problems. In particular, we explore vector quantization for the ï¬rst time in the context of learned compression and demonstrate its beneï¬ts over scalar quantization.
Unlike essentially all previous works, we make no assumption on the marginal distribution of the features or model parameters to be quantized by relying on a histogram of the assignment probabilities rather than the parametric models commonly used in the literature. | 1704.00648#6 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 6 | The goal of this paper is to expand on the known mathe- matical properties of the softmax function and demonstrate how they can be utilized to conclude the convergence of learning algorithm in a simple application of game-theoretic reinforcement learning. For additional examples and more involved applications, see our related paper [21]. We perform our analysis and derive new properties by using tools from convex analysis [7], [24] and monotone operator theory [25], [26]. It has been known that stateless multi-agent reinforce- ment learning that utilizes the softmax selection rule has close connections with the ï¬eld of evolutionary game theory [9], [10], [22], [23], [54], [20], [58]. Therefore, throughout this paper, we motivate some of the results through insights from the ï¬eld of evolutionary game theory [15], [16], [17]. It is our hope that researchers across various disciplines can apply our results presented here to their domain-speciï¬c problems. | 1704.00805#6 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 7 | We apply our method to DNN model compression for a 32-layer ResNet model [13] and full- resolution image compression using a variant of the compressive autoencoder proposed recently in [30]. In both cases, we obtain performance competitive with the state-of-the-art, while making fewer model assumptions and signiï¬cantly simplifying the training procedure compared to the original works [30, 5].
The remainder of the paper is organized as follows. Section 2 reviews related work, before our soft-to-hard vector quantization method is introduced in Section 3. Then we apply it to a compres- sive autoencoder for image compression and to ResNet for DNN compression in Section 4 and 5, respectively. Section 6 concludes the paper.
# 2 Related Work
There has been a surge of interest in DNN models for full-resolution image compression, most notably [32, 33, 3, 4, 30], all of which outperform JPEG [35] and some even JPEG 2000 [29] The pioneering work [32, 33] showed that progressive image compression can be learned with convolutional recurrent neural networks (RNNs), employing a stochastic quantization method during training. [3, 30] both rely on convolutional autoencoder architectures. These works are discussed in more detail in Section 4. | 1704.00648#7 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 7 | The organization of this paper is as follows. Section II intro- duces notation convention for the rest of the paper. Section III introduces the deï¬nition of the softmax function, its different representations as well as a brief survey of several of its known properties from the existing literature. Section IV provides the background to convex optimization and monotone operator theory. In Section V, we derive additional properties of the softmax function. Section VI provides an analysis of a stateless continuous-time score-based reinforcement learning scheme within a single-player game setup to illustrate the application of these properties. Section VII provides the conclusion and some open problems for further investigation.
# II. NOTATIONS
The notations used in this paper are as follows:
• The p-norm of a vector is denoted as ‖·‖_p, 1 ≤ p ≤ ∞.
• The (n − 1)-dimensional unit simplex is denoted by Δ^{n−1}, where Δ^{n−1} := {x ∈ R^n | ‖x‖_1 = 1, x_i ≥ 0}. The (relative) interior of Δ^{n−1} is denoted
⤠| 1704.00805#7 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 8 | In the context of DNN model compression, the line of works [12, 11, 5] adopts a multi-step procedure in which the weights of a pretrained DNN are ï¬rst pruned and the remaining parameters are quantized using a k-means like algorithm, the DNN is then retrained, and ï¬nally the quantized DNN model is encoded using entropy coding. A notable different approach is taken by [34], where the DNN
compression task is tackled using the minimum description length principle, which has a solid information-theoretic foundation.
It is worth noting that many recent works target quantization of the DNN model parameters and possibly the feature representation to speed up DNN evaluation on hardware with low-precision arithmetic, see, e.g., [15, 23, 38, 43]. However, most of these works do not speciï¬cally train the DNN such that the quantized parameters are compressible in an information-theoretic sense. | 1704.00648#8 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 8 | • The p-norm of a vector is denoted as ‖·‖_p, 1 ≤ p ≤ ∞.
• The (n − 1)-dimensional unit simplex is denoted by Δ^{n−1}, where Δ^{n−1} := {x ∈ R^n | ‖x‖_1 = 1, x_i ≥ 0}. The (relative) interior of Δ^{n−1} is denoted by int(Δ^{n−1}), where int(Δ^{n−1}) := {x ∈ R^n | ‖x‖_1 = 1, x_i > 0}.
• e_i ∈ R^n denotes the i-th canonical basis vector of R^n, e.g., e_i = [0, . . . , 1, . . . , 0]^T, where the 1 occupies the i-th position.
• The vector of ones is denoted as 1 := [1, . . . , 1]^T and the vector of zeros is denoted as 0 := [0, . . . , 0]^T.
⢠Matrices are denoted using bold capital letters such as A. In general, a vector in the unconstrained space Rn will be denoted using z, while a vector in the n 1 dimensional unit simplex will be denoted using x. All logarithms are assumed to be base e. | 1704.00805#8 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 9 | Gradually moving from an easy (convex or differentiable) problem to the actual harder problem during optimization, as done in our soft-to-hard quantization framework, has been studied in various contexts and falls under the umbrella of continuation methods (see [2] for an overview). Formally related but motivated from a probabilistic perspective are deterministic annealing methods for maximum entropy clustering/vector quantization, see, e.g., [24, 42]. Arguably most related to our approach is [41], which also employs continuation for nearest neighbor assignments, but in the context of learning a supervised prototype classiï¬er. To the best of our knowledge, continuation methods have not been employed before in an end-to-end learning framework for neural network-based image compression or DNN compression.
# 3 Proposed Soft-to-Hard Vector Quantization
# 3.1 Problem Formulation | 1704.00648#9 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 9 | 2
# III. REVIEW OF THE SOFTMAX FUNCTION AND ITS KNOWN PROPERTIES
While the softmax function may take on different appear- ances depending on the application, its base model is that of a vector-valued function, whose individual component consists of an exponential evaluated at an element of a vector, which is normalized by the summation of the exponential of all the elements of that vector. In this section, we present several well- known and equivalent representations of the softmax function, and review some of its properties that are either immediate based on its deï¬nition or have been covered in the existing literature.
A. Representations of the Softmax function
The most well-known and widely-accepted version of the softmax function is as follows [5], [37], [40], [41], [43], [59].
Definition 1. The softmax function is given by σ : R^n → int(Δ^{n−1}),
σ(z) = (1 / Σ_{j=1}^n exp(λz_j)) [exp(λz_1), . . . , exp(λz_n)]^T, λ > 0, (1)
where λ is referred to as the inverse temperature constant.
Remark 1. The softmax function is commonly presented in the literature as the individual components of (1), | 1704.00805#9 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
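A minimal NumPy sketch of the softmax of Definition 1 with inverse temperature λ; the input vector is an illustrative assumption, and the shift by max(z) is a standard numerical-stability trick that does not change the output, since σ(z + c1) = σ(z).

```python
import numpy as np

def softmax(z, lam=1.0):
    # Definition 1: sigma(z)_i = exp(lam*z_i) / sum_j exp(lam*z_j), lam > 0.
    z = np.asarray(z, dtype=float)
    e = np.exp(lam * (z - z.max()))   # subtracting max(z) avoids overflow, output unchanged
    return e / e.sum()

z = [2.0, 1.0, 0.1]                   # illustrative input vector
x = softmax(z, lam=1.0)
print(x, x.sum())                     # positive entries summing to 1: a point in int(simplex)
```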
1704.00648 | 10 | # 3 Proposed Soft-to-Hard Vector Quantization
# 3.1 Problem Formulation
Preliminaries and Notations. We consider the standard model for DNNs, where we have an architecture F : R^{d_1} → R^{d_{K+1}} composed of K layers F = F_K ∘ · · · ∘ F_1, where layer F_i maps R^{d_i} to R^{d_{i+1}}. We refer to W = [w_1, · · · , w_K] as the parameters of the network and we denote the intermediate layer outputs of the network as x^{(0)} := x and x^{(i)} := F_i(x^{(i−1)}), such that F(x) = x^{(K)} and x^{(i)} is the feature vector produced by layer F_i.
The parameters of the network are learned w.r.t. training data X = {x_1, · · · , x_N} ⊂ R^{d_1} and labels Y = {y_1, · · · , y_N} ⊂ R^{d_{K+1}}, by minimizing a real-valued loss L(X, Y; F). Typically, the loss can be decomposed as a sum over the training data plus a regularization term,
1x LIND F) = MFO), ¥e) + AR(W), () Na | 1704.00648#10 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 10 | where λ is referred to as the inverse temperature constant.
Remark 1. The softmax function is commonly presented in the literature as the individual components of (1),
σ_i(z) = exp(λz_i) / Σ_{j=1}^n exp(λz_j), 1 ≤ i ≤ n. (2)
When λ = 1, we refer to (1) as the standard softmax function. As λ → 0, the output of σ converges point-wise to the center of the simplex, i.e., a uniform probability distribution. On the other hand, as λ → ∞, the output of σ converges point-wise to e_j ∈ R^n, where j = argmax_{1≤i≤n} z_i, provided that the difference between two or more components of z is not too small [23], [37]. We note that elsewhere in the literature, the reciprocal of λ is also commonly used. Remark 2. In R^2, (2) reduces to the logistic function in terms of z_i − z_j,
σ_i(z) = exp(λz_i) / (exp(λz_i) + exp(λz_j)) = 1 / (1 + exp(−λ(z_i − z_j))), j ≠ i.
(3) Furthermore, we note that (2) can be equivalently represented as, | 1704.00805#10 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
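The limits described in Remark 1 (uniform distribution as λ → 0, a vertex of the simplex as λ → ∞) can be checked numerically; the test vector below is an illustrative assumption.

```python
import numpy as np

def softmax(z, lam):
    e = np.exp(lam * (z - np.max(z)))
    return e / e.sum()

z = np.array([0.3, 1.2, -0.5, 0.9])
for lam in (1e-3, 1.0, 10.0, 100.0):
    print(f"lam = {lam:8.3f} -> {np.round(softmax(z, lam), 3)}")
# small lam: close to the center of the simplex (uniform distribution)
# large lam: close to the basis vector e_j of the largest component of z
```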
1704.00648 | 11 | 1x LIND F) = MFO), ¥e) + AR(W), () Na
where ¢( F(x), y) is the sample loss, \ > 0 sets the regularization strength, and R(W) is a regularizer (e.g., R(W) = >; ||w:l|? for ly regularization). In this case, the parameters of the network can be learned using stochastic gradient descent over mini-batches. Assuming that the data 1â, Y on which the network is trained is drawn from some distribution Px y, the loss can be thought of as an estimator of the expected loss E[¢(F(X), Y) + \R(W)]. In the context of image classification, R® would correspond to the input image space and R¢*+# to the classification probabilities, and @ would be the categorical cross entropy. | 1704.00648#11 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 11 | (3) Furthermore, we note that (2) can be equivalently represented as,
σ_i(z) = exp(λz_i − log(Σ_{j=1}^n exp(λz_j))). (4)
While (4) is seldom used as a representation of the softmax function, the author noted that (4) represents an exponential family, which is the solution of the replicator dynamics of evolutionary game theory [2], [16], [17]. We will expand on the connections between the replicator dynamics and the softmax function in section V.
Fig. 2: Plots of the log-sum-exp, negative entropy and both components of the softmax function over R^2 with λ = 1. The red curve on the negative entropy plot is the restriction of the negative entropy over the 1-dimensional simplex, Δ^1.
Another important representation of the softmax function can be derived by considering the "argmax function" under entropy regularization.2 Let z ∈ R^n, and consider the argmax of x^T z over the simplex,
M(z) = argmax a! z. weAn-} (5) | 1704.00805#11 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
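Representation (4) is exactly the log-domain way the softmax is usually computed in practice; a minimal sketch, with an illustrative input vector:

```python
import numpy as np

def log_softmax(z, lam=1.0):
    # Representation (4): sigma_i(z) = exp(lam*z_i - log(sum_j exp(lam*z_j))).
    a = lam * np.asarray(z, dtype=float)
    lse = a.max() + np.log(np.exp(a - a.max()).sum())   # stable log-sum-exp of lam*z
    return a - lse                                       # log of sigma(z)

z = [3.0, -1.0, 0.5]
probs = np.exp(log_softmax(z))
print(probs, probs.sum())   # identical to the direct formula (2), but overflow-safe
```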
1704.00648 | 12 | We say that the deep architecture is an autoencoder when the network maps back into the input space, with the goal of reproducing the input. In this case, d_1 = d_{K+1} and F(x) is trained to approximate x, e.g., with a mean squared error loss ℓ(F(x), y) = ‖F(x) − y‖². Autoencoders typically condense the dimensionality of the input into some smaller dimensionality inside the network, i.e., the layer with the smallest output dimension, x^{(b)} ∈ R^{d_b}, has d_b < d_1, which we refer to as the "bottleneck". Compressible representations. We say that a weight parameter w_i or a feature x^{(i)} has a compressible representation if it can be serialized to a binary stream using few bits. For DNN compression, we want the entire network parameters W to be compressible. For image compression via an autoencoder, we just need the features in the bottleneck, x^{(b)}, to be compressible.
Suppose we want to compress a feature representation z autoencoder) given an input x. Assuming that the data X will be a sample from a continuous random variable Z. , Y Rd in our network (e.g., x(b) of an â is drawn from some distribution PX,Y, z | 1704.00648#12 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 12 | M(z) = argmax_{x ∈ Δ^{n−1}} x^T z. (5)
over int(Δ^{n−1}) [48], by strong concavity of the argument of (6), it can be shown that by invoking the Karush-Kuhn-Tucker (KKT) conditions, the unique maximizer of (6) is the softmax function evaluated at z ∈ R^n, i.e., argmax_{x ∈ Δ^{n−1}} [x^T z − λ^{−1} Σ_{j=1}^n x_j log(x_j)] = σ(z). (8)
When there is a unique largest element in the vector z, it is clear that M returns the basis vector corresponding to the entry of that element, that is, M(z) = e_j, where j = argmax_{1≤i≤n} z_i. This solution corresponds to a vertex of the simplex. In general, however, (5) is set-valued; to see this, simply consider the case where two or more components of z are equal.
For many learning related applications, it is highly desirable for M (z) to be singled-valued [22], [23], [41], [49], [50]. The most common approach to achieve this is by employing a so-called regularizer function Ï to (5), which yields the regularized argmax function:3 | 1704.00805#12 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
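A quick numerical check (not from the paper) that the softmax attains the maximum of the entropy-regularized objective in (8): random interior points of the simplex never beat σ(z); z and λ are illustrative.

```python
import numpy as np

rng = np.random.default_rng(1)
lam = 2.0
z = rng.standard_normal(4)                 # illustrative payoff/score vector

def softmax(z, lam):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

def objective(x):
    # x^T z - lam^{-1} * sum_j x_j log(x_j), on the interior of the simplex
    return x @ z - (x * np.log(x)).sum() / lam

def random_interior_point():
    x = np.clip(rng.dirichlet(np.ones(4)), 1e-12, None)
    return x / x.sum()

x_star = softmax(z, lam)
best_random = max(objective(random_interior_point()) for _ in range(10000))
print(objective(x_star) >= best_random)    # True: sigma(z) is the unique maximizer
```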
1704.00648 | 13 | To store z with a finite number of bits, we need to map it to a discrete space. Specifically, we map z to a sequence of m symbols using a (symbol) encoder E : R^d → [L]^m, where each symbol is an index ranging from 1 to L, i.e., [L] := {1, . . . , L}. The reconstruction of z is then produced by a (symbol) decoder D : [L]^m → R^d. Since z is
a sample from Z, the symbol stream E(z) is drawn from the discrete probability distribution P_{E(Z)}. Thus, given the encoder E, according to Shannon's source coding theorem [7], the correct metric for compressibility is the entropy of E(Z):
H(E(Z)) = − Σ_{e ∈ [L]^m} P(E(Z) = e) log(P(E(Z) = e)). (2)
Our generic goal is hence to optimize the rate distortion trade-off between the expected loss and the entropy of E(Z):
min E_{X,Y}[ℓ(F̂(X), Y) + λR(W)] + βH(E(Z)), (3)
where ËF is the architecture where z has been replaced with Ëz, and β > 0 controls the trade-off between compressibility of z and the distortion it imposes on ËF . | 1704.00648#13 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
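A small sketch of how the entropy in (2) can be estimated from samples of the symbol stream; the toy symbol data below is an assumption for illustration.

```python
import numpy as np

def stream_entropy_nats(symbol_streams):
    # Empirical estimate of H(E(Z)) from equation (2): entropy of whole strings e in [L]^m.
    _, counts = np.unique(symbol_streams, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-(p * np.log(p)).sum())

rng = np.random.default_rng(0)
streams = rng.integers(0, 4, size=(1000, 3))    # toy data: N=1000 streams, m=3 symbols, L=4
print(stream_entropy_nats(streams))             # close to 3*log(4) ~ 4.16 for uniform symbols
```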
1704.00805 | 13 | M(z) = argmax_{x ∈ Δ^{n−1}} [x^T z − ψ(x)]. (6)
It has been noted in [20], [39], [51], [52] that the argument of the left-hand side of (8),
x^T z − λ^{−1} Σ_{j=1}^n x_j log(x_j), (9)
represents the so-called "free energy" in statistical thermodynamics. In light of this connection, from a game-theoretic perspective, the softmax function can be thought of as providing the mixed strategy with the maximum entropy which maximizes the payoff of a game [20].
It is also worth noting that the maximum of (9) over the simplex is by deï¬nition the Legendre-Fenchel transform of the negative entropy function [24, p. 102], also commonly referred to as the log-sum-exp function, which is given by lse : Rn
A common choice of the regularizer is the negative entropy function restricted to the simplex, which under the convention 0 log(0) = 0, is given by Ï : Rn
â
⪠{
â}
ψ(x) = λ^{−1} Σ_{j=1}^n x_j log(x_j) if x ∈ Δ^{n−1}, λ > 0, and ψ(x) = +∞ if x ∉ Δ^{n−1}.
(7)
â
â} lse(z) := λâ1 log(
⪠{ | 1704.00805#13 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
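The fact that the maximum of the free energy (9) over the simplex equals the log-sum-exp (its Legendre-Fenchel transform) is easy to verify numerically at the maximizer x = σ(z); the vector and λ below are illustrative.

```python
import numpy as np

lam = 1.5
z = np.array([0.2, -1.0, 0.7, 1.3])

def softmax(z, lam):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

def lse(z, lam):
    a = lam * z
    return (a.max() + np.log(np.exp(a - a.max()).sum())) / lam

x = softmax(z, lam)
free_energy = x @ z - (x * np.log(x)).sum() / lam   # the "free energy" (9) at x = sigma(z)
print(np.isclose(free_energy, lse(z, lam)))         # True: max of (9) over the simplex is lse(z)
```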
1704.00648 | 14 | However, we cannot optimize (3) directly. First, we do not know the distribution of X and Y. Second, the distribution of Z depends in a complex manner on the network parameters W and the distribution of X. Third, the encoder E is a discrete mapping and thus not differentiable. For our ï¬rst approximation we consider the sample entropy instead of H(E(Z)). That is, given the data and X [L]m some ï¬xed network parameters W, we can estimate the probabilities P (E(Z) = e) for e â Lm. If z is the via a histogram. For this estimate to be accurate, we however would need bottleneck of an autoencoder, this would correspond to trying to learn a single histogram for the entire discretized data space. We relax this by assuming the entries of E(Z) are i.i.d. such that we can instead compute the histogram over the L distinct values. More precisely, we assume that for e = (e1, l=1 pel , where pj is the histogram estimate
â [N ], el(zi) = j
el(zi) | l [m], i pj := |{ , â â mN }| (4) | 1704.00648#14 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
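A minimal sketch of the histogram estimate in (4), counting how often each of the L symbols occurs across all m positions and all N data points; the toy symbols are assumed for illustration.

```python
import numpy as np

def histogram_probabilities(E, L):
    # Equation (4): p_j = #{(l, i) : e_l(z_i) = j} / (m*N), for symbols j = 0, ..., L-1 here.
    counts = np.bincount(np.asarray(E).ravel(), minlength=L)
    return counts / E.size

rng = np.random.default_rng(0)
E = rng.integers(0, 6, size=(100, 8))   # toy symbols: N=100 feature vectors, m=8 symbols each
p = histogram_probabilities(E, L=6)
print(p, p.sum())                       # a valid probability vector over the L quantization levels
```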
1704.00805 | 14 | lse(z) := λ^{−1} log(Σ_{j=1}^n exp(λz_j)), λ > 0. (10)
When λ = 1, we refer to (10) as the standard log-sum-exp function.
It is well-known that the log-sum-exp is an approximation to the vector-max function [7, p. 72], [24, p. 27],
When λ = 1, we refer to (7) as the standard negative entropy function.
Since negative entropy is λâ1-strongly convex4 in
2As pointed out in [5, p. 182], the softmax function is a soft approximation of the argmax function, z ++ argmax a! z, not that of the âmaxâ function. 2eEAn-1
the regularizer is also referred to as an admissible deterministic perturbation [2, p. 189], penalty function [22], [23], smoothing function [44] or Bregman function [48]. For detailed construction of the regularizer, see [22], [23], [47]. | 1704.00805#14 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
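A numerically stable sketch of the log-sum-exp in (10); the large test values are chosen so that a naive implementation would overflow.

```python
import numpy as np

def lse(z, lam=1.0):
    # Equation (10): lse(z) = lam^{-1} * log(sum_j exp(lam*z_j)), computed via the max-shift trick.
    a = lam * np.asarray(z, dtype=float)
    m = a.max()
    return (m + np.log(np.exp(a - m).sum())) / lam

z = np.array([1000.0, 999.0, 998.0])    # naive np.exp(z) would overflow here
print(lse(z))                           # finite value slightly above vecmax(z) = 1000
```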
1704.00648 | 15 | p_j := |{ e_l(z_i) | l ∈ [m], i ∈ [N], e_l(z_i) = j }| / (mN), (4)
where we denote the entries of E(z) = (e_1(z), · · · , e_m(z)) and z_i is the output feature z for training data point x_i ∈ X. We then obtain an estimate of the entropy of Z by substituting the approximation (3.1) into (2),
H(E(Z)) ≈ − Σ_{e ∈ [L]^m} (Π_{l=1}^m p_{e_l}) log(Π_{l=1}^m p_{e_l}) = −m Σ_{j=1}^L p_j log p_j = mH(p), (5)
where the first (exact) equality is due to [7], Thm. 2.6.6, and H(p) := − Σ_{j=1}^L p_j log p_j is the sample entropy for the (i.i.d., by assumption) components of E(Z)1.
We now can simplify the ideal objective of (3), by replacing the expected loss with the sample mean over ℓ and the entropy using the sample entropy H(p), obtaining
N => e(F(x,). ys) + ARCW) + BmH(p). ) Na | 1704.00648#15 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
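A small sketch of the sample entropy H(p) and the resulting entropy term βmH(p) that enters the simplified objective (6); p, m and β below are illustrative assumptions.

```python
import numpy as np

def sample_entropy(p):
    # H(p) = -sum_j p_j log(p_j), with the convention 0*log(0) = 0.
    p = np.asarray(p, dtype=float)
    nz = p > 0
    return float(-(p[nz] * np.log(p[nz])).sum())

p = np.array([0.5, 0.25, 0.125, 0.125, 0.0])   # assumed histogram over L = 5 centers
m, beta = 16, 0.01                             # assumed symbols per feature and trade-off weight
H = sample_entropy(p)
print(H, beta * m * H)                         # H(p) and the entropy penalty beta*m*H(p)
```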
1704.00805 | 16 | vecmax(z) := max . z1, . . . , zn} { Rn, vecmax(z)
That is, for any z ∈ R^n, vecmax(z) ≤ lse(z) ≤ vecmax(z) + λ^{−1} log(n), which can be shown by considering exp(λ vecmax(z)) ≤ Σ_{j=1}^n exp(λz_j) ≤ n exp(λ vecmax(z)).
Due to this reason, the log-sum-exp is sometimes referred to as the âsoftmax functionâ in optimization-oriented literature. We note that the dual or convex conjugate of the log- sum-exp function (10) is the negative entropy restricted to the simplex, given by (7) [7, p. 93][24, p. 482][52]. We illustrate the log-sum-exp function as well as the negative entropy and the softmax function in Figure 2. By Fenchel3
Young inequality, the log-sum-exp function is bounded below by a linear function,
Ise(z) > x! zâ W(x), V2 ⬠A"1,2 ERâ. (11)
â¥
â
â
â
â | 1704.00805#16 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
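The sandwich bound vecmax(z) ≤ lse(z) ≤ vecmax(z) + λ^{−1} log(n) can be spot-checked on random vectors; the dimension and λ values are illustrative.

```python
import numpy as np

def lse(z, lam):
    a = lam * np.asarray(z, dtype=float)
    m = a.max()
    return (m + np.log(np.exp(a - m).sum())) / lam

rng = np.random.default_rng(0)
for lam in (0.5, 1.0, 5.0):
    z = rng.standard_normal(10)
    vmax, val = z.max(), lse(z, lam)
    assert vmax <= val <= vmax + np.log(z.size) / lam
    print(f"lam={lam}: {vmax:.4f} <= {val:.4f} <= {vmax + np.log(z.size)/lam:.4f}")
```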
1704.00648 | 17 | In case z is composed of one or more parameter vectors, such as in DNN compression where z = W, z and Ëz cease to be random variables, since W is a parameter of the model. That is, opposed to the that produces another source ËZ which we want to be compressible, case where we have a source we want the discretization of a single parameter vector W to be compressible. This is analogous to compressing a single document, instead of learning a model that can compress a stream of documents. In this case, (3) is not the appropriate objective, but our simpliï¬ed objective in (6) remains appropriate. This is because a standard technique in compression is to build a statistical model of the (ï¬nite) data, which has a small sample entropy. The only difference is that now the histogram probabilities in (4) , i.e., N = 1 and zi = W in (4), and they count towards are taken over W instead of the dataset storage as well as the encoder E and decoder D.
1In fact, from [7], Thm. 2.6.6, it follows that if the histogram estimates pj are exact, (5) is an upper bound for the true H(E(Z)) (i.e., without the i.i.d. assumption).
4 | 1704.00648#17 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 17 | Ise(z) > x! zâ W(x), V2 ⬠A"1,2 ERâ. (11)
Further consequences of the duality between the negative entropy and the log-sum-exp function as well as its role in game theory will not be explored at this time. Interested readers may refer to [38], [52] or any standard textbooks on convex analysis, for example, [7], [24], [28].
Finally, we provide a probabilistic characterization of the softmax function. Let ϵ_i, i ∈ {1, . . . , n}, be independent and identically distributed random variables with a Gumbel distribution given by,
Pr[ϵ_i ≤ c] = exp(−exp(−λc − γ)), (12)
where γ ≈ 0.57721 is the Euler-Mascheroni constant. It can be shown that for any vector z ∈ R^n [2, p. 194][19],
Pr[i = argmax_{1≤j≤n} (z_j + ϵ_j)] = σ_i(z). (13)
In game theory terms, (13) represents the probability of choosing the pure strategy that maximizes the payoff or score Rn, after the payoff or score has been perturbed by a z stochastic perturbation.
B. Properties of the Softmax - State of the Art | 1704.00805#17 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
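The probabilistic characterization in (12)–(13) is the Gumbel-max construction; a Monte Carlo sketch (with illustrative z and λ) shows the empirical argmax frequencies matching σ(z).

```python
import numpy as np

rng = np.random.default_rng(0)
lam = 2.0
z = np.array([0.5, 1.0, -0.2])

def softmax(z, lam):
    e = np.exp(lam * (z - z.max()))
    return e / e.sum()

# eps with CDF exp(-exp(-lam*c - gamma)) is (G - gamma)/lam for a standard Gumbel G.
gamma = 0.5772156649015329
n_trials = 200_000
g = -np.log(-np.log(rng.random((n_trials, z.size))))   # standard Gumbel samples
eps = (g - gamma) / lam
winners = np.argmax(z + eps, axis=1)
freq = np.bincount(winners, minlength=z.size) / n_trials

print(np.round(freq, 3))             # empirical Pr[i = argmax_j (z_j + eps_j)] ...
print(np.round(softmax(z, lam), 3))  # ... matches sigma(z), illustrating (13)
```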
1704.00648 | 18 | 4
Challenges. Eq. (6) gives us a unified objective that can well describe the trade-off between compressible representations in a deep architecture and the original training objective of the architecture.
However, the problem of finding a good encoder E, a corresponding decoder D, and parameters W that minimize the objective remains. First, we need to impose a form for the encoder and decoder, and second we need an approach that can optimize (6) w.r.t. the parameters W. Independently of the choice of E, (6) is challenging since E is a mapping to a finite set and, therefore, not differentiable. This implies that neither H(p) is differentiable nor F̂ is differentiable w.r.t. the parameters of z and layers that feed into z. For example, if F̂ is an autoencoder and z = x^{(b)}, the output of the network will not be differentiable w.r.t. w_1, . . . , w_b.
These challenges motivate the design decisions of our soft-to-hard annealing approach, described in the next section.
# 3.2 Our Method | 1704.00648#18 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 18 | B. Properties of the Softmax - State of the Art
We briefly comment on some properties of the softmax function that are either immediate or have been covered in the existing literature. First, $\sigma$ maps the origin of $\mathbb{R}^n$ to the barycenter of $\Delta^{n-1}$, that is, $\sigma(0) = n^{-1}\mathbf{1}$. The softmax function $\sigma$ is surjective but not injective, as it can easily be shown that for any $z, z + c\mathbf{1} \in \mathbb{R}^n$, $\forall c \in \mathbb{R}$, we have $\sigma(z + c\mathbf{1}) = \sigma(z)$. By definition, $\|\sigma(z)\|_1 = \sigma(z)^\top \mathbf{1} = 1$, $\forall z \in \mathbb{R}^n$.
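These elementary properties are easy to confirm numerically. A minimal check, assuming NumPy (not from the paper):

```python
# sigma(0) is the barycenter of the simplex, sigma(z + c*1) = sigma(z),
# and the entries of sigma(z) always sum to one.
import numpy as np

def softmax(z, lam=1.0):
    w = np.exp(lam * (z - np.max(z)))
    return w / w.sum()

n = 4
z = np.array([1.0, -0.5, 2.0, 0.3])
print(np.allclose(softmax(np.zeros(n)), np.full(n, 1.0 / n)))  # barycenter
print(np.allclose(softmax(z + 3.7), softmax(z)))               # shift invariance
print(np.isclose(softmax(z).sum(), 1.0))                       # sums to one
```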
In a recent paper, the authors of [43] noted that $\sigma(P(z)) = P\sigma(z)$, where $P$ is any permutation matrix, and that the standard softmax function satisfies a type of "coordinate non-expansiveness" property, whereby given a vector $z \in \mathbb{R}^n$ with $z_j \geq z_i$, then $0 \leq \sigma_j(z) - \sigma_i(z) \leq \frac{1}{2}(z_j - z_i)$. The last property can be derived by exploiting the properties of the hyperbolic tangent function. It was also noted that these properties of the softmax function bear similarities with the Euclidean projection onto $\Delta^{n-1}$ [43]. | 1704.00805#18 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 19 | · ·
· ·
These challenges motivate the design decisions of our soft-to-hard annealing approach, described in the next section.
# 3.2 Our Method
Encoder and decoder form. For the encoder $E : \mathbb{R}^d \to [L]^m$, the vector $z$ is arranged into a matrix $Z = [\bar z^{(1)}, \cdots, \bar z^{(m)}]$ whose columns are points in $\mathbb{R}^{d/m}$, and each column $\bar z^{(l)}$ is assigned the index of its nearest neighbor in the set of centers $\mathcal{C} = \{c_1, \cdots, c_L\} \subset \mathbb{R}^{d/m}$, which we partition into the Voronoi tessellation over the centers. The decoder $D : [L]^m \to \mathbb{R}^d$ then simply constructs $\hat Z = [c_{e_1}, \cdots, c_{e_m}]$ by picking the corresponding centers and maps it back into $\mathbb{R}^d$. We will interchangeably write $\hat z = D(E(z))$ and $\hat Z = D(E(Z))$. The idea is then to relax $E$ and $D$ into continuous mappings via soft assignments instead of the hard nearest neighbor assignment of $E$.
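For concreteness, a minimal sketch of this hard encoder/decoder pair follows; it assumes NumPy, and the shapes and helper names are illustrative rather than the authors' implementation.

```python
# Hard vector-quantization encoder/decoder over L centers.
import numpy as np

def encode(z, centers, m):
    """Reshape z in R^d into m points of dim d/m and map each to the
    index of its nearest center (rows of `centers`, shape (L, d/m))."""
    Z = z.reshape(m, -1)
    d2 = ((Z[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    return d2.argmin(axis=1)                          # symbols e_1..e_m in [L]

def decode(symbols, centers):
    """Pick the corresponding centers and flatten back into R^d."""
    return centers[symbols].reshape(-1)

rng = np.random.default_rng(0)
L, m, dim = 6, 4, 8
centers = rng.normal(size=(L, dim // m))
z = rng.normal(size=dim)
z_hat = decode(encode(z, centers, m), centers)        # quantized z
print(encode(z, centers, m), np.linalg.norm(z - z_hat))
```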
Soft assignments. We define the soft assignment of $\bar z \in \mathbb{R}^{d/m}$ to $\mathcal{C}$ as

$$\phi(\bar z) := \mathrm{softmax}(-\sigma[\|\bar z - c_1\|^2, \ldots, \|\bar z - c_L\|^2]) \in \mathbb{R}^L, \qquad (7)$$

where $\mathrm{softmax}(y_1, \ldots, y_L)_j := \frac{e^{y_j}}{\sum_{l=1}^{L} e^{y_l}}$ is the standard softmax operator, such that $\phi(\bar z)$ has positive entries and $\|\phi(\bar z)\|_1$ | 1704.00648#19 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 19 | In a direction that is tangential to the aim of this paper, the authors of [40] are interested in finding a bound on the softmax function. It can be shown that,

$$\sigma_i(z) = \frac{\exp(\lambda z_i)}{\sum_{j=1}^{n} \exp(\lambda z_j)} \;\geq\; \prod_{j \neq i} \frac{1}{1 + \exp(-\lambda(z_i - z_j))}, \qquad (14)$$

where (14) is referred to as the "one-vs-each" bound, which can be generalized to bounds on arbitrary probabilities [40]. From (3), we see that this inequality is tight for n = 2.
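A quick Monte-Carlo sanity check of the one-vs-each bound (14) — a hedged sketch assuming NumPy, not taken from [40] or from this paper:

```python
# Verify sigma_i(z) >= prod_{j != i} 1 / (1 + exp(-lam*(z_i - z_j))).
import numpy as np

rng = np.random.default_rng(1)
lam = 1.5
for _ in range(1000):
    z = rng.normal(size=5)
    sig = np.exp(lam * z) / np.exp(lam * z).sum()
    for i in range(z.size):
        others = np.delete(z, i)
        bound = np.prod(1.0 / (1.0 + np.exp(-lam * (z[i] - others))))
        assert sig[i] >= bound - 1e-12
print("one-vs-each bound held on all random trials")
```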
IV. REVIEW OF CONVEX OPTIMIZATION AND MONOTONE OPERATOR THEORY
In this section we review some of the definitions and results from convex optimization and monotone operator theory that will be used in the derivation of new properties of the softmax function. Since the following definitions are standard, readers who are familiar with these subjects can skip this section without any loss of continuity. Most of the proofs of the propositions in this section can be found in references such as [7], [24], [25], [26], [27], [28]. Throughout this section, we assume that $\mathbb{R}^n$ is equipped with the standard inner product $\langle z, z' \rangle := \sum_{i=1}^{n} z_i z'_i$ with the induced 2-norm | 1704.00805#19 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 20 | $\phi(\bar z) := \mathrm{softmax}(-\sigma[\|\bar z - c_1\|^2, \ldots, \|\bar z - c_L\|^2]) \in \mathbb{R}^L$, where $\mathrm{softmax}(y_1, \ldots, y_L)_j := \frac{e^{y_j}}{\sum_{l=1}^{L} e^{y_l}}$ is the standard softmax operator, such that $\phi(\bar z)$ has positive entries and $\|\phi(\bar z)\|_1 = 1$. We denote the $j$-th entry of $\phi(\bar z)$ with $\phi_j(\bar z)$ and note that

$$\hat\phi_j(\bar z) = \begin{cases} 1 & \text{if } j = \operatorname{argmin}_{l \in [L]} \|\bar z - c_l\| \\ 0 & \text{otherwise,} \end{cases}$$
such that $\hat\phi(\bar z) := \lim_{\sigma\to\infty} \phi(\bar z)$ converges to a one-hot encoding of the nearest center to $\bar z$ in $\mathcal{C}$. We therefore refer to $\hat\phi(\bar z)$ as the hard assignment of $\bar z$ to $\mathcal{C}$, as opposed to the soft assignment $\phi(\bar z)$.
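The convergence of the soft assignment to the hard one as σ grows can be illustrated in a few lines of NumPy. This is a sketch under assumed shapes and names, not the authors' implementation:

```python
# Soft assignment phi(z̄) from (7) and its hard limit for large sigma.
import numpy as np

def soft_assignment(zbar, centers, sigma):
    d2 = ((centers - zbar) ** 2).sum(axis=1)          # ||z̄ - c_j||^2
    logits = -sigma * d2
    w = np.exp(logits - logits.max())                 # stable softmax
    return w / w.sum()                                # positive entries, sums to 1

rng = np.random.default_rng(0)
centers = rng.normal(size=(5, 3))                     # L = 5 centers in R^3
zbar = rng.normal(size=3)

for sigma in (1.0, 10.0, 1000.0):
    print(sigma, np.round(soft_assignment(zbar, centers, sigma), 3))
# For large sigma, the output approaches a one-hot vector at the nearest
# center, i.e. the hard assignment used by the encoder E.
```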
Using the soft assignment, we define the soft quantization of $\bar z$ as

$$\tilde Q(\bar z) := \sum_{j=1}^{L} c_j \phi_j(\bar z) = C\phi(\bar z),$$

where we write the centers as a matrix $C = [c_1, \cdots, c_L]$. The hard assignment counterpart is taken with $\hat Q(\bar z) := \lim_{\sigma\to\infty} \tilde Q(\bar z) = c_{e(\bar z)}$, where $e(\bar z)$ is the index of the nearest center to $\bar z$ in $\mathcal{C}$. Therefore, we can now write: | 1704.00648#20 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 20 | $\|z\|_2 := \sqrt{\langle z, z \rangle}$. We assume the domain of $f$, $\operatorname{dom} f$, is convex. $C^1$, $C^2$ denote the class of continuously-differentiable and twice continuously-differentiable functions, respectively.
Definition 2. A function $f : \operatorname{dom} f \subseteq \mathbb{R}^n \to \mathbb{R}$ is convex if,

$$f(\theta z + (1-\theta)z') \leq \theta f(z) + (1-\theta) f(z'), \qquad (15)$$

for all $z, z' \in \operatorname{dom} f$ and $\theta \in [0,1]$, and strictly convex if (15) holds strictly whenever $z \neq z'$ and $\theta \in (0,1)$.

The convexity of a $C^2$ function $f$ is easily determined through its Hessian $\nabla^2 f$.

Lemma 1. Let $f$ be $C^2$. Then $f$ is convex if and only if $\operatorname{dom} f$ is convex and its Hessian is positive semidefinite, that is, for all $z \in \operatorname{dom} f$,

$$v^\top \nabla^2 f(z)\, v \geq 0, \quad \forall v \in \mathbb{R}^n, \qquad (16)$$

and strictly convex if $\operatorname{dom} f$ is convex and $\nabla^2 f(z)$ is positive definite for all $z \in \operatorname{dom} f$. | 1704.00805#20 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
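The convexity machinery above connects directly to the softmax: the paper shows that the softmax is the gradient map of the log-sum-exp function and derives its Hessian. The following finite-difference check is an illustrative sketch (assuming NumPy; not from the paper):

```python
# Check that grad(lse) = softmax and that the Hessian of lse is PSD,
# which by Lemma 1 confirms the convexity of the log-sum-exp function.
import numpy as np

lam = 2.0
lse = lambda z: np.log(np.exp(lam * z).sum()) / lam
softmax = lambda z: np.exp(lam * z) / np.exp(lam * z).sum()

rng = np.random.default_rng(0)
z, h = rng.normal(size=4), 1e-6
grad = np.array([(lse(z + h * e) - lse(z - h * e)) / (2 * h) for e in np.eye(4)])
print(np.allclose(grad, softmax(z), atol=1e-5))       # gradient map is softmax

s = softmax(z)
H = lam * (np.diag(s) - np.outer(s, s))               # Hessian of lse at z
print(np.linalg.eigvalsh(H).min() >= -1e-12)          # PSD (smallest eig ~ 0)
```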
1704.00648 | 21 | $\hat Z = D(E(Z)) = [\hat Q(\bar z^{(1)}), \cdots, \hat Q(\bar z^{(m)})] = C[\hat\phi(\bar z^{(1)}), \cdots, \hat\phi(\bar z^{(m)})]$.

Now, instead of computing $\hat Z$ via hard nearest neighbor assignments, we can approximate it with a smooth relaxation $\tilde Z := C[\phi(\bar z^{(1)}), \cdots, \phi(\bar z^{(m)})]$ by using the soft assignments instead of the hard assignments. Denoting the corresponding vector form by $\tilde z$, this gives us a differentiable approximation $\tilde F$ of the quantized architecture $\hat F$, by replacing $\hat z$ in the network with $\tilde z$.

Entropy estimation. Using the soft assignments, we can similarly define a soft histogram, by summing up the partial assignments to each center instead of counting as in (4):

$$q_j := \frac{1}{mN}\sum_{i=1}^{N}\sum_{l=1}^{m} \phi_j(\bar z^{(l)}_i).$$
This gives us a valid probability mass function $q = (q_1, \cdots, q_L)$, which is differentiable but converges to $p = (p_1, \cdots, p_L)$ as $\sigma \to \infty$.
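A minimal sketch of the soft histogram q and the resulting cross-entropy surrogate, assuming NumPy; the shapes, names, and hard histogram p used here are illustrative rather than the authors' code:

```python
# Soft histogram over L centers and the cross entropy H(q, p).
import numpy as np

def soft_assignments(Zbar, centers, sigma):
    d2 = ((Zbar[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
    logits = -sigma * d2
    w = np.exp(logits - logits.max(axis=1, keepdims=True))
    return w / w.sum(axis=1, keepdims=True)           # one row phi per z̄

rng = np.random.default_rng(0)
centers = rng.normal(size=(8, 2))                     # L = 8 centers
Zbar = rng.normal(size=(500, 2))                      # all z̄ from the batch
phi = soft_assignments(Zbar, centers, sigma=5.0)

q = phi.mean(axis=0)                                  # soft histogram, sums to 1
hard = np.bincount(phi.argmax(axis=1), minlength=8)
p = hard / hard.sum()                                 # hard histogram
soft_entropy = -(q * np.log2(p + 1e-12)).sum()        # cross entropy H(q, p)
print(q.round(3), soft_entropy)
```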
We can now define the "soft entropy" as the cross entropy between $p$ and $q$: | 1704.00648#21 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 21 | and strictly convex if $\operatorname{dom} f$ is convex and $\nabla^2 f(z)$ is positive definite for all $z \in \operatorname{dom} f$.

Next, we introduce the concept of a monotone operator and its related properties. A monotone operator is usually taken as a set-valued relation; however, it is also natural for the definitions related to a monotone operator to be directly applied to single-valued maps [26].
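As a concrete preview of how these definitions apply to the softmax: since the paper shows the softmax to be the gradient of the convex log-sum-exp function, it must satisfy the monotonicity inequality below. A quick numerical check (illustrative sketch assuming NumPy, not from the paper):

```python
# Test (sigma(z) - sigma(z'))^T (z - z') >= 0 on random pairs of points.
import numpy as np

lam = 3.0
softmax = lambda z: np.exp(lam * (z - z.max())) / np.exp(lam * (z - z.max())).sum()

rng = np.random.default_rng(0)
ok = all(
    (softmax(z) - softmax(zp)) @ (z - zp) >= -1e-12
    for z, zp in (rng.normal(size=(2, 6)) for _ in range(1000))
)
print("monotonicity held on all sampled pairs:", ok)
```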
Definition 3. ([26, p. 154]) An operator (or mapping) $F : \mathcal{D} \subseteq \mathbb{R}^n \to \mathbb{R}^n$ is said to be:

• pseudo monotone on $\mathcal{D}$ if,

$$F(z')^\top(z - z') \geq 0 \implies F(z)^\top(z - z') \geq 0, \quad \forall z, z' \in \mathcal{D}. \qquad (17)$$

• pseudo monotone plus on $\mathcal{D}$ if it is pseudo monotone on $\mathcal{D}$ and,

$$F(z')^\top(z - z') \geq 0 \ \text{and}\ F(z)^\top(z - z') = 0 \implies F(z) = F(z'), \quad \forall z, z' \in \mathcal{D}. \qquad (18)$$

• monotone on $\mathcal{D}$ if,

$$(F(z) - F(z'))^\top(z - z') \geq 0, \quad \forall z, z' \in \mathcal{D}. \qquad (19)$$ | 1704.00805#21 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00805 | 22 | if,
if,
# D
(F(z) â F(2â)) (2-2) >0,Vz,2°â¬D. (19)
F(2â)) (2-2) >0,Vz,2°â¬D. if it is monotone on D
# â ⢠monotone plus on
â
â¥
â D and,
monotone plus on D if it is monotone on D and,
(z-2') =0
# D
(F(z2)-F(2â)) (z-2') =0 = F(z) = F(zâ), Vz,2' â¬D. (20)
.
⢠strictly monotone on if,
F(2â))"(2-2/)
(F(z) â F(2â))"(2-2/) > 0,Vz,2° â¬D,z #2â. 2D
0,Vz,2°
â
â
# â D | 1704.00805#22 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 23 | We have therefore obtained a differentiable âsoft entropyâ loss (w.r.t. q), which is an upper bound on the sample entropy H(p). Hence, we can indirectly minimize H(p) by minimizing ËH(Ï), treating the histogram probabilities of p as constants for gradient computation. However, we note that while qj is additive over the training data and the symbol sequence, log(qj) is not. This prevents the use of mini-batch gradient descent on ËH(Ï), which can be an issue for large scale learning problems. In this case, we can instead re-deï¬ne the soft entropy ËH(Ï) as H(q, p). As before, ËH(Ï) H(p) , but ËH(Ï) ceases to be an upper bound for H(p). The beneï¬t is that now ËH(Ï) can be as Ï â â decomposed as
L Nomb 4 f a( A() = H(qa.p) =- Ya logpj =-S> OS â hil log p;, (8) j=l i=1 l=1 j=l
# l=1 and the components l
such that we get an additive loss over the samples xi
[m]. | 1704.00648#23 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 23 | 0,Vz,2°
â
â
# â D
Clearly, strictly monotone implies monotone plus, which in turn implies monotone, pseudo monotone plus and pseudo monotone. By deï¬nition, every strictly monotone operator is an injection. We refer to an operator F as being (strictly) anti-monotone if F is (strictly) monotone. The following proposition provides a natural connection between C 1, convex functions and monotone gradient maps.
Lemma 2. A C 1 function f is convex if and only if
(Viz) - VEâ) (2-2) > 0,Vz,2' â¬domf, (22)
0,Vz,2'
(
â
â â
â
â¥
â
and strictly convex if and only if,
(Vi (2) â VF(2)) (2-2) > 0,Vz, 2â ⬠dom f,z £2â. (23)
Next, we introduce the notions of Lipschitz continuity and the two concepts are related co-coercivity, and show that through the gradient of a convex function.
Deï¬nition 4. An operator (or mapping) F : is said to be D â Rn â Rn | 1704.00805#23 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 24 | # l=1 and the components l
such that we get an additive loss over the samples xi
[m].
[m]. such that we get an additive loss over the samples xi
# â X
â
Soft-to-hard deterministic annealing. Our soft assignment scheme gives us differentiable ap- proximations ËF and ËH(Ï) of the discretized network ËF and the sample entropy H(p), respectively. However, our objective is to learn network parameters W that minimize (6) when using the encoder and decoder with hard assignments, such that we obtain a compressible symbol stream E(z) which we can compress using, e.g., arithmetic coding [40]. | 1704.00648#24 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 24 | Deï¬nition 4. An operator (or mapping) F : is said to be D â Rn â Rn
⢠Lipschitz (or L-Lipschitz) if there exists a L > 0 such that,
|F(2) â Flo < Llle - 2'll2,Vz,2' ⬠D. (24)
# Flo
# Llle
â
â¤
â
# â D
If L = 1 in (24), then F is referred to as nonexpansive. (0, 1), then F is referred to as contractive. Otherwise, if L â ⢠co-coercive (or 1 L -co-coercive) if there exists a L > 0 such
that,
1
5
. â D (25) If L = 1 in (25), then F is referred to as ï¬rmly nonexpan- sive. | 1704.00805#24 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 25 | To this end, we anneal Ï from some initial value Ï0 to inï¬nity during training, such that the soft approximation gradually becomes a better approximation of the ï¬nal hard quantization we will use. Choosing the annealing schedule is crucial as annealing too slowly may allow the network to invert the soft assignments (resulting in large weights), and annealing too fast leads to vanishing gradients too early, thereby preventing learning. In practice, one can either parametrize Ï as a function of the iteration, or tie it to an auxiliary target such as the difference between the network losses incurred by soft quantization and hard quantization (see Section 4 for details).
For a simple initialization of Ï0 and the centers ¯z(l) i i { | using SGD.
, we can sample the centers from the set
# C
by minimizing the cluster energy
{20 i ⬠[N],l ⬠[m]} and then cluster Z by minimizing the cluster energy }7,- 2 ||z â Q(z)||? using SGD.
Z ËQ(¯z)
# Image Compression | 1704.00648#25 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 25 | that,
1
5
. â D (25) If L = 1 in (25), then F is referred to as ï¬rmly nonexpan- sive.
L -co-coercive oper- ator is L-Lipschitz, in particular, every ï¬rmly nonexpansive operator is nonexpansive. However, the reverse need not be z is nonexpansive but not ï¬rmly true, for example f (z) = nonexpansive. Fortunately, the Baillon-Haddad theorem ([27, p. 40], Theorem 3.13) provides the condition for when a L- Lipschitz operator is also 1
Theorem 1. (Baillon-Haddad theorem) Let f : dom f Rn â R be a C 1, convex function on dom f and such that f is â f is L-Lipschitz continuous for some L > 0, then -co-coercive. â
â 1 L | 1704.00805#25 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 26 | Z ËQ(¯z)
# Image Compression
We now show how we can use our framework to realize a simple image compression system. For the architecture, we use a variant of the convolutional autoencoder proposed recently in [30] (see Appendix A.1 for details). We note that while we use the architecture of [30], we train it using our soft-to-hard entropy minimization method, which differs signiï¬cantly from their approach, see below. | 1704.00648#26 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 26 | â 1 L
Finally, we will introduce the notion of maximal monotonic- ity. Let H : R" â 2®" be the set-valued map, where 2°â denotes the power set of Râ. Let the graph of H be given by graH := {(u,v) ⬠R" x R"|v = Hu}. The set-valued map H is said to be monotone if (uâu')'(vâv') > 0,v⬠H(u),v' ⬠H(uâ).
â
(22)
(23)
5
Deï¬nition 5. ([25, p. 297]) Let H : Rn be monotone. Then H is maximal monotone if there exists no monotone operator G : Rn such that gra G properly contains gra H, i.e., for every (u, v)
â
# Ã gra H) (u
(u,v) ⬠gra & (V(u',v') ⬠graH) (uâw')'(vâvâ) > 0. (26) | 1704.00805#26 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 27 | Our goal is to learn a compressible representation of the features in the bottleneck of the autoencoder. Because we do not expect the features from different bottleneck channels to be identically distributed, we model each channelâs distribution with a different histogram and entropy loss, adding each entropy term to the total loss using the same β parameter. To encode a channel into symbols, we separate the channel matrix into a sequence of pw ph-dimensional patches. These patches (vectorized) form the Rd/mÃm, where m = d/(pwph), such that Z contains m (pwph)-dimensional points. columns of Z Having ph or pw greater than one allows symbols to capture local correlations in the bottleneck, which is desirable since we model the symbols as i.i.d. random variables for entropy coding. At test time, the symbol encoder E then determines the symbols in the channel by performing a nearest Rpwph , resulting in ËZ, as described above. During neighbor assignment over a set of L centers training we instead use the soft quantized ËZ, also w.r.t. the centers
# C
6
:= | 1704.00648#27 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 27 | (u,v) ⬠gra & (V(u',v') ⬠graH) (uâw')'(vâvâ) > 0. (26)
By Zornâs Lemma, every monotone operator can be ex- tended to a maximal monotone operator [24, p. 535], [25, p. 297]. For the scope of this paper, we are interested when a single-valued map is maximal monotone. The following proposition provides a simple characterization of this result [24, p. 535].
Lemma 3. If a continuous mapping F : Rn Rn is mono- tone, it is maximal monotone. In particular, every differentiable monotone mapping is maximal monotone.
V. DERIVATION OF PROPERTIES OF SOFTMAX FUNCTION
In this section we derive several properties of the softmax function using tools from convex analysis and monotone op- erator theory introduced in the previous section. We begin by establishing the connection between the log-sum-exp function and the softmax function.
It has long been known that the softmax function is the gradient map of a convex potential function [37], however, the fact that its potential function is the log-sum-exp function (i.e., (10)) is rarely discussed.5 We make this connection clear with the following proposition. | 1704.00805#27 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00805 | 28 | Proposition 1. The softmax function is the gradient of the log-sum-exp function, that is, Ï(z) =
â
Proof. Evaluating the partial derivative of lse at each compo- exp(λzi) j=1 exp(λzj)
gradient, we have,
Olse(z) exp() Ox 1 Vise(2)=] : |=azâ~â] : | =o). lse(z exp(Az; a se) X (Az) exp(Azn) O2n
â
Next, we calculate the Hessian of the log-sum-exp function (and hence the Jacobian of the softmax function).
Proposition 2. The Jacobian of the softmax function and Hessian of the log-sum-exp function is given by:
J[o(z)] = V? lse(z) = A(diag(o(z)) â o(z)o(z)"), (27)
â
â
where (27) is a symmetric positive semideï¬nite matrix and satisï¬es J[Ï(z)]1 = 0, that is, 1 is the eigenvector associated with the zero eigenvalue of J[Ï(z)].
5Although not explicitly stated, this relationship could also be found in [7, p. 93] and various other sources. | 1704.00805#28 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 29 | Figure 1: Top: MS-SSIM as a function of rate for SHA (Ours), BPG, JPEG 2000, JPEG, for each data set. Bottom: A visual example from the Kodak data set along with rate / MS-SSIM / SSIM / PSNR.
We trained different models using Adam [17], see Appendix A.2. Our training set is composed similarly to that described in [3]. We used a subset of 90,000 images from ImageNET [8], which 128 pixels, with a batch size of 15. we downsampled by a factor 0.7 and trained on crops of 128 To estimate the probability distribution p for optimizing (8), we maintain a histogram over 5,000 images, which we update every 10 iterations with the images from the current batch. Details about other hyperparameters can be found in Appendix A.2. | 1704.00648#29 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 29 | 5Although not explicitly stated, this relationship could also be found in [7, p. 93] and various other sources.
Proof. The diagonal entries of 2 lse are given by,
n A Jexp(Az) 2 exp(Az;) â exp(Azi)? 0? Ise(z) j=l awe a ; oF (22 exp(23))? j=
and the off-diagonal entries of partials, â 2 lse are given by the mixed
0? Ise(z) _ âAexp(Azp) exp(Azi) O08 (3 exp (zi)? j=l
Assembling the partial derivatives, we obtain the Hessian of lse and the Jacobian of Ï:
J[o(z)] = V? Ise(z) = A(diag(o(z)) â o(z)o(z)"). (28)
â
â
The symmetry of J[Ï(z)] comes from the symmetric struc- ture of the diagonal and outer product terms. The positive semi-deï¬niteness of J[Ï(z)] follows from an application of the Cauchy-Schwarz inequality [7, p. 74]. It can be shown through direct computation that J[Ï(z)]1 = 0 or alternatively refer to [2, p. 213]. | 1704.00805#29 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |
1704.00648 | 30 | The training of our autoencoder network takes place in two stages, where we move from an identity function in the bottleneck to hard quantization. In the first stage, we train the autoencoder without any quantization. Similar to we gradually unfreeze the channels in the bottleneck during training (this gives a slight improvement over learning all channels jointly from the start). This yields an efficient weight initialization and enables us to then initialize op and C as described above. In the second stage, we minimize 6). jointly learning network weights and quantization levels. We anneal a by letting the gap between soft and hard quantization error go to zero as the number of iterations t goes to infinity. Let eg = ||F'(x) âx||2 be the soft error, e7 = || F(x) âxl|? be the hard error. With gap(t) = en âes we can denote the error between the actual the desired gap with eg(t) = gap(t) â T/(T +t) gap(0), such that the gap is halved after T iterations. We update o according to o(t + 1) = o(t) + Ke ec(t), where o(t) denotes o at iteration ¢. Fig.|3}in | 1704.00648#30 | Soft-to-Hard Vector Quantization for End-to-End Learning Compressible Representations | We present a new approach to learn compressible representations in deep
architectures with an end-to-end training strategy. Our method is based on a
soft (continuous) relaxation of quantization and entropy, which we anneal to
their discrete counterparts throughout training. We showcase this method for
two challenging applications: Image compression and neural network compression.
While these tasks have typically been approached with different methods, our
soft-to-hard quantization approach gives results competitive with the
state-of-the-art for both. | http://arxiv.org/pdf/1704.00648 | Eirikur Agustsson, Fabian Mentzer, Michael Tschannen, Lukas Cavigelli, Radu Timofte, Luca Benini, Luc Van Gool | cs.LG, cs.CV | null | null | cs.LG | 20170403 | 20170608 | [
{
"id": "1703.10114"
},
{
"id": "1609.07061"
},
{
"id": "1702.04008"
},
{
"id": "1511.06085"
},
{
"id": "1702.03044"
},
{
"id": "1609.07009"
},
{
"id": "1611.01704"
},
{
"id": "1608.05148"
},
{
"id": "1612.01543"
},
{
"id": "1607.05006"
},
{
"id": "1510.00149"
}
] |
1704.00805 | 30 | Remark 3. This result was previous noted in references such as [37], [38] and can be found in [2, p. 195][7, p. 74]. As a trivial consequence of Proposition 2, we can write the individual components of J[Ï(z)] as,
Ïj(z)), (29)
Jij[Ï(z)] = λÏi(z)(δij â
where δij is the Kronecker delta function. This representation is preferred for machine learning related applications and is loosely referred to as the âderivative of the softmaxâ [11].
Remark 4. Using the Jacobian of the softmax function given in (27), we provide the following important observation that connects the ï¬eld of evolutionary game theory with convex analysis and monotone operator theory. Let x = Ï(z), then we have,
Vv? Ise(z)|,-4(2) = A(diag(x) â ax"). (30)
Vv? Ise(z)|,-4(2) = A(diag(x) â ax"). (30) We note that this is precisely the matrix term appearing in the replicator dynamics [2, p. 229], [45], that is, | 1704.00805#30 | On the Properties of the Softmax Function with Application in Game Theory and Reinforcement Learning | In this paper, we utilize results from convex analysis and monotone operator
theory to derive additional properties of the softmax function that have not
yet been covered in the existing literature. In particular, we show that the
softmax function is the monotone gradient map of the log-sum-exp function. By
exploiting this connection, we show that the inverse temperature parameter
determines the Lipschitz and co-coercivity properties of the softmax function.
We then demonstrate the usefulness of these properties through an application
in game-theoretic reinforcement learning. | http://arxiv.org/pdf/1704.00805 | Bolin Gao, Lacra Pavel | math.OC, cs.LG | 10 pages, 4 figures. Comments are welcome | null | math.OC | 20170403 | 20180821 | [
{
"id": "1612.05628"
},
{
"id": "1602.02068"
},
{
"id": "1808.04464"
}
] |