Yujia Li, Daniel Tarlow, Marc Brockschmidt, and Richard Zemel. Gated graph sequence neural networks. In International Conference on Learning Representations (ICLR), 2016.
Chris J Maddison and Daniel Tarlow. Structured generative models of natural source code. In International Conference on Machine Learning (ICML), 2014.
Diego Marcheggiani and Ivan Titov. Encoding sentences with graph convolutional networks for semantic role labeling. In ACL, 2017.
Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. Distributed representations of words and phrases and their compositionality. In Neural Information Processing Systems (NIPS), 2013.
Jeffrey Pennington, Richard Socher, and Christopher D Manning. GloVe: Global vectors for word representation. In EMNLP, 2014.
Veselin Raychev, Martin Vechev, and Eran Yahav. Code completion with statistical language models. In Programming Languages Design and Implementation (PLDI), pp. 419–428, 2014.
Veselin Raychev, Martin Vechev, and Andreas Krause. Predicting program properties from Big Code. In Principles of Programming Languages (POPL), 2015.

Learning to Represent Programs with Graphs
Miltiadis Allamanis, Marc Brockschmidt, Mahmoud Khademi
cs.LG (primary), cs.AI, cs.PL, cs.SE. Submitted November 1, 2017; last updated May 4, 2018. Published in ICLR 2018. arXiv admin note: text overlap with arXiv:1705.07867.
http://arxiv.org/pdf/1711.00740

Abstract: Learning tasks on source code (i.e., formal languages) have been considered recently, but most work has tried to transfer natural language methods and does not capitalize on the unique opportunities offered by code's known syntax. For example, long-range dependencies induced by using the same variable or function in distant locations are often not considered. We propose to use graphs to represent both the syntactic and semantic structure of code and use graph-based deep learning methods to learn to reason over program structures. In this work, we present how to construct graphs from source code and how to scale Gated Graph Neural Networks training to such large graphs. We evaluate our method on two tasks: VarNaming, in which a network attempts to predict the name of a variable given its usage, and VarMisuse, in which the network learns to reason about selecting the correct variable that should be used at a given program location. Our comparison to methods that use less structured program representations shows the advantages of modeling known structure, and suggests that our models learn to infer meaningful names and to solve the VarMisuse task in many cases. Additionally, our testing showed that VarMisuse identifies a number of bugs in mature open-source projects.
Veselin Raychev, Pavol Bielik, and Martin Vechev. Probabilistic model for code with decision trees. In Object-Oriented Programming, Systems, Languages, and Applications (OOPSLA), 2016.
Andrew Rice, Edward Aftandilian, Ciera Jaspan, Emily Johnston, Michael Pradel, and Yulissa Arroyo-Paredes. Detecting argument selection defects. Proceedings of the ACM on Programming Languages, 1(OOPSLA):104, 2017.
Michael Schlichtkrull, Thomas N. Kipf, Peter Bloem, Rianne van den Berg, Ivan Titov, and Max Welling. Modeling relational data with graph convolutional networks. arXiv preprint arXiv:1703.06103, 2017.
Armando Solar-Lezama. Program synthesis by sketching. PhD thesis, University of California, Berkeley, 2008.
Mingzhe Wang, Yihe Tang, Jian Wang, and Jia Deng. Premise selection for theorem proving by deep graph embedding. In Advances in Neural Information Processing Systems, pp. 2783–2793, 2017.
Published as a conference paper at ICLR 2018
Figure 6: Precision-Recall (a) and Receiver Operating Characteristic (ROC) (b) curves for the GGNN model on VARMISUSE. Note that the y axis starts from 50%.
Table 3: Performance of GGNN model on VARMISUSE per number of type-correct, in-scope candidate variables. Here we compute the performance of the full GGNN model that uses subtokens.

| # of candidates | 2 | 3 | 4 | 5 | 6 or 7 | 8+ |
|---|---|---|---|---|---|---|
| Accuracy on SEENPROJTEST (%) | 91.6 | 84.5 | 81.8 | 78.6 | 75.1 | 77.5 |
| Accuracy on UNSEENPROJTEST (%) | 85.7 | 77.1 | 75.7 | 69.0 | 71.5 | 62.4 |
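For scale, the accuracies in Table 3 can be set against a uniform random-guess baseline of 1/k for k candidates. A small sketch (accuracy values copied from Table 3; the "6 or 7" and "8+" buckets are approximated by 6.5 and 8 for the baseline only):

```python
# Compare Table 3 accuracies against a uniform random-guess baseline (100/k %).
candidates = [2, 3, 4, 5, 6.5, 8]          # 6.5 and 8 approximate "6 or 7" and "8+"
seen = [91.6, 84.5, 81.8, 78.6, 75.1, 77.5]
unseen = [85.7, 77.1, 75.7, 69.0, 71.5, 62.4]

for k, s, u in zip(candidates, seen, unseen):
    baseline = 100.0 / k
    print(f"k={k:>4}: baseline {baseline:5.1f}%  seen {s:4.1f}%  unseen {u:4.1f}%")
```

Even in the hardest bucket (8+ candidates), both test sets remain far above the random baseline.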
# A PERFORMANCE CURVES
Figure 6 shows the ROC and precision-recall curves for the GGNN model. As the reader may observe, setting the false positive rate to 10% gives a true positive rate[5] of 73% on SEENPROJTEST and 69% on UNSEENPROJTEST. This suggests that the model can be used in practice at a high-precision operating point with acceptable performance.
# B VARMISUSE PREDICTION SAMPLES
Below we list a set of samples from our SEENPROJTEST projects with comments about the model performance. Code comments and formatting may have been altered for typesetting reasons. The ground truth choice is underlined.
# Sample 1
for (var port = #1 ; #2 < #3 ; #4 ++) {
    if (!activePorts.Contains( #5 ))
        return #6 ;
}
#1 startingFrom: 97%, endingAt: 3%
#2 port: 100%, startingFrom: 0%, endingAt: 0%
#3 endingAt: 100%, startingFrom: 0%, port: 0%
#4 port: 100%, startingFrom: 0%, endingAt: 0%
#5 port: 100%, startingFrom: 0%, endingAt: 0%
#6 port: 100%, startingFrom: 0%, endingAt: 0%
> The model correctly predicts all variables in the loop.
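The per-slot percentages shown in these samples form a probability distribution over the type-correct, in-scope candidates for each slot. One plausible final step that produces such a distribution is a softmax over per-candidate scores; the candidate names come from Sample 1, but the raw scores below are made up for illustration and are not the trained model's:

```python
import math

# Softmax over per-candidate scores for one slot. The scores are
# hypothetical; only the candidate names are taken from Sample 1.
def softmax(scores):
    m = max(scores.values())
    exp = {k: math.exp(v - m) for k, v in scores.items()}
    z = sum(exp.values())
    return {k: v / z for k, v in exp.items()}

probs = softmax({"port": 6.0, "startingFrom": 0.5, "endingAt": 0.4})
for name, p in sorted(probs.items(), key=lambda kv: -kv[1]):
    print(f"{name}: {p:.0%}")
```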
[5] A 10% false positive rate is widely accepted in industry, with 30% as a maximum acceptable limit (Bessey et al., 2010).
# Sample 2

var path = CreateFileName( #1 );
bitmap.Save( #2 , ImageFormat.Png);
return #3 ;

#1 name: 86%, DIR_PATH: 14%
#2 path: 90%, name: 8%, DIR_PATH: 2%
#3 path: 76%, name: 16%, DIR_PATH: 8%
> String variables are not confused; their semantic role is inferred correctly.
# Sample 3
[global::System.Diagnostics.DebuggerNonUserCodeAttribute]
public void MergeFrom(pb::CodedInputStream input) {
    uint tag;
    while ((tag = input.ReadTag()) != 0) {
        switch (tag) {
            default:
                input.SkipLastField();
                break;
            case 10: {
                #1 .AddEntriesFrom(input, _repeated_payload_codec);
                break;
            }
        }
    }
}
#1 Payload: 66%, payload_: 44%
> The model is commonly confused by aliases, i.e. variables that point to the same location in memory. In this sample, either choice would have yielded identical behavior.
# Sample 4
public override bool IsDisposed {
    get {
        lock ( #1 ) {
            return #2 ;
        }
    }
}

#1 _gate: 99%, _observers: 1%
#2 _isDisposed: 90%, _isStopped: 8%, HasObservers: 2%
> The ReturnsTo edge helps predict variables that would otherwise have been impossible to predict.
# Sample 5
/// <summary>
/// Notifies all subscribed observers about the exception.
/// </summary>
/// <param name="error">The exception to send to all observers.</param>
public override void OnError(Exception error) {
    if ( #1 == null)
        throw new ArgumentNullException(nameof( #2 ));

    var os = default(IObserver<T>[]);
    lock ( #3 ) {
        CheckDisposed();
        if (! #4 ) {
            os = _observers.Data;
            _observers = ImmutableList<IObserver<T>>.Empty;
            #5 = true;
            #6 = #7 ;
        }
    }

    if (os != null) {
        foreach (var o in os) {
            o.OnError( #8 );
        }
    }
}
#1 error: 93%, _exception: 7%
#2 error: 98%, _exception: 2%
#3 _gate: 100%, _observers: 0%
#4 _isStopped: 86%, _isDisposed: 13%, HasObservers: 1%
#5 _isStopped: 91%, _isDisposed: 9%, HasObservers: 0%
#6 _exception: 100%, error: 0%
#7 error: 98%, _exception: 2%
#8 _exception: 99%, error: 1%
> The model predicts the correct variable for all slots apart from the last. Reasoning about the last one requires interprocedural understanding of the code across the class file.
# Sample 6
private bool BecomingCommand(object message) {
    if (ReceiveCommand( #1 ))
        return true;
    if ( #2 .ToString() == #3 )
        #4 .Tell( #5 );
    else
        return false;
    return true;
}

#1 message: 100%, Response: 0%, Message: 0%
#2 message: 100%, Response: 0%, Message: 0%
#3 Response: 91%, Message: 9%
#4 Probe: 98%, AskedForDelete: 2%
#5 Response: 98%, Message: 2%
> The model correctly predicts all usages except the one in slot #3. Reasoning about this snippet requires additional semantic information about the intent of the code.
# Sample 7
var response = ResultsFilter(typeof(TResponse), #1 , #2 , request);
#1 httpMethod: 99%, absoluteUrl: 1%, UserName: 0%, UserAgent: 0%
#2 absoluteUrl: 99%, httpMethod: 1%, UserName: 0%, UserAgent: 0%
> The model knows about selecting the correct string parameters because it matches them to the formal parameter names.
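A toy illustration of why formal parameter names are so discriminative here: scoring each candidate by subtoken overlap with the formal parameter's name already separates them cleanly. This is a hypothetical heuristic for intuition only, not the paper's mechanism, which feeds such name information to the network through graph edges and learned representations:

```python
import re

# Split an identifier into lowercase subtokens (camelCase / digits).
def subtokens(name):
    return {t.lower() for t in re.findall(r"[A-Z]?[a-z]+|[A-Z]+(?![a-z])|\d+", name)}

# Jaccard overlap between the subtoken sets of two identifiers.
def overlap(a, b):
    sa, sb = subtokens(a), subtokens(b)
    return len(sa & sb) / len(sa | sb)

candidates = ["httpMethod", "absoluteUrl", "UserName", "UserAgent"]
formal = "httpMethod"  # hypothetical formal parameter name of the callee
best = max(candidates, key=lambda c: overlap(c, formal))
print(best)
```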
# Sample 8
if ( #1 >= #2 )
    throw new InvalidOperationException(Strings_Core.FAILED_CLOCK_MONITORING);
#1 n: 100%, MAXERROR: 0%, SYNC_MAXRETRIES: 0%
#2 MAXERROR: 62%, SYNC_MAXRETRIES: 22%, n: 16%
> It is hard for the model to reason about conditionals, especially with rare constants as in slot #2.
# C NEAREST NEIGHBOR OF GGNN USAGE REPRESENTATIONS
Here we show pairs of nearest neighbors based on the cosine similarity of the learned representations u(t, v). Each slot t is marked in dark blue and all usages of v are marked in yellow (i.e., variableName ). These are hand-picked examples showing both good and bad cases. A brief description follows each pair.
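The pairing described above, nearest neighbors under cosine similarity, can be sketched as follows; the vectors are small stand-ins, not the actual learned representations u(t, v):

```python
import math

# Cosine similarity between two dense vectors.
def cosine(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Stand-in usage representations, one per slot.
reps = {
    "slot_A": [0.9, 0.1, 0.0],
    "slot_B": [0.8, 0.2, 0.1],
    "slot_C": [0.0, 0.1, 0.9],
}

# Nearest neighbor of a slot among all other slots.
def nearest(name):
    return max((o for o in reps if o != name), key=lambda o: cosine(reps[name], reps[o]))

print(nearest("slot_A"))
```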
# Sample 1
...
public void MoveShapeUp(BaseShape shape ) {
    if ( shape != null) {
        for (int i = 0; i < Shapes.Count - 1; i++) {
            if (Shapes[i] == shape ) {
                Shapes.Move(i, ++i);
                return;
            }
        }
    }
}
...
...
lock (lockObject) {
    if ( unobservableExceptionHanler != null)
        return false;
    unobservableExceptionHanler = handler;
}
...
> Slots that are checked for null-ness have similar representations.
# Sample 2
...
public IActorRef ResolveActorRef(ActorPath actorPath ) {
    if (HasAddress( actorPath .Address))
        return _local.ResolveActorRef(RootGuardian, actorPath .ElementsWithUid);
    ...
}
...

...
ActorPath actorPath ;
if (TryParseCachedPath(path, out actorPath)) {
    if (HasAddress( actorPath .Address)) {
        if ( actorPath .ToStringWithoutAddress().Equals("/"))
            return RootGuarding;
        ...
    }
    ...
}
...
> Slots that follow similar API protocols have similar representations. Note that the function HasAddress is a local function, seen only in the test set.
# Sample 3
...
foreach (var filter in configuration.Filters) {
    GlobalJobFilter.Filters.Add( filter );
}
...

...
public void Count_ReturnsNumberOfElements() {
    _collection.Add( _filterInstance );
    Assert.Equal(1, _collection.Count);
}
...
> Adding elements to a collection-like object yields similar representations.
# D DATASET
The collected dataset and its characteristics are listed in Table 4. The full dataset as a set of projects and its parsed JSON will become available online.
Table 4: Projects in our dataset, ordered alphabetically. kLOC measures the number of non-empty lines of C# code. Projects marked with Dev were used as a development set. Projects marked with † were in the test-only dataset. The rest of the projects were split into train-validation-test. The dataset contains in total about 2.9 MLOC.
| Name | Git SHA | kLOCs | Slots | Vars | Description |
|---|---|---|---|---|---|
| Akka.NET | 719335a1 | 240 | 51.3k | | Framework |
| AutoMapper | 2ca7c2b5 | 46 | 3.7k | 10.7k | Object-to-Object Mapping Library |
| BenchmarkDotNet | 1670ca34 | 28 | 5.1k | | |
| BotBuilder | 190117c3 | 44 | 6.4k | | |
| choco | 93985688 | 36 | 3.8k | | |
| commandline† | 09677b16 | 11 | 1.1k | | |
| CommonMark.NET (Dev) | f3d54530 | 14 | 2.6k | | |
| Dapper | 931c700d | 18 | 3.3k | | |
| EntityFramework | fa0b7ec8 | 263 | 33.4k | | |
| Hangfire | ffc4912f | 33 | 3.6k | | |
| Humanizer† | cc11a77e | 27 | 2.4k | | |
| Lean† | f574bfd7 | 190 | 26.4k | | |
| Nancy | 72e1f614 | 70 | 7.5k | | |
| Newtonsoft.Json | 6057d9b8 | 123 | 14.9k | | |
| Ninject | 7006297f | 13 | 0.7k | | |
| NLog | 643e326a | 75 | 8.3k | | |
| Opserver | 51b032e7 | 24 | 3.7k | | |
| OptiKey | 7d35c718 | 34 | 6.1k | | |
| orleans | e0d6a150 | 300 | 30.7k | | |
| Polly | 0afdbc32 | 32 | 3.8k | | |
1711.00740 | 55 | 3.7k 6.1k 30.7k 3.8k 10.7k Object-to-Object Mapping Library quartznet ravendbDev RestSharp Rx.NET scriptcs ServiceStack ShareX SignalR Wox b33e6f86 55230922 70de357b 2d146fe5 f3cc8bcb 6d59da75 718dd711 fa88089e cdaf6272 49 647 20 180 18 231 125 53 13 9.6k 78.0k 4.0k 14.0k 2.7k 38.0k 22.3k 6.5k 2.0k Library 9.8k Scheduler 82.7k Document Database 4.5k REST and HTTP API Client Library 21.9k Reactive Language Extensions 4.3k C# Text Editor 46.2k Web Framework 18.1k 10.5k 2.1k Application Launcher Sharing Application Push Notification Framework | 1711.00740#55 | Learning to Represent Programs with Graphs |
1711.00740 | 56 |
Published as a conference paper at ICLR 2018
For this work, we released a large portion of the data, with the exception of projects with a GPL license. The data can be found at https://aka.ms/iclr18-prog-graphs-dataset. Since we are excluding some projects from the data, below we report the results, averaged over three runs, on the published dataset:

                 Accuracy (%)   PR AUC
SEENPROJTEST     84.0           0.976
UNSEENPROJTEST   74.1           0.934
| 1711.00740#56 | Learning to Represent Programs with Graphs |
1710.11469 | 0 | arXiv:1710.11469v5 [stat.ML] 13 Apr 2019
# Conditional Variance Penalties and Domain Shift Robustness
Christina Heinze-Deml & Nicolai Meinshausen Seminar for Statistics ETH Zurich Zurich, Switzerland {heinzedeml,meinshausen}@stat.math.ethz.ch
Abstract When training a deep neural network for image classification, one can broadly distinguish between two types of latent features of images that will drive the classification. We can divide latent features into (i) "core" or "conditionally invariant" features X^core whose distribution X^core|Y, conditional on the class Y, does not change substantially across domains and (ii) "style" features X^style whose distribution X^style|Y can change substantially across domains. Examples for style features include position, rotation, image quality or brightness but also more complex ones like hair color, image quality or posture for images of persons. Our goal is to minimize a loss that is robust under changes in the distribution of these style features. In contrast to previous work, we assume that the domain itself is not observed and hence a latent variable. | 1710.11469#0 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 1 | We do assume that we can sometimes observe a typically discrete identifier or "ID variable". In some applications we know, for example, that two images show the same person, and ID then refers to the identity of the person. The proposed method requires only a small fraction of images to have ID information. We group observations if they share the same class and identifier (Y, ID) = (y, id) and penalize the conditional variance of the prediction or the loss if we condition on (Y, ID). Using a causal framework, this conditional variance regularization (CoRe) is shown to protect asymptotically against shifts in the distribution of the style variables. Empirically, we show that the CoRe penalty improves predictive accuracy substantially in settings where domain changes occur in terms of image quality, brightness and color while we also look at more complex changes such as changes in movement and posture. Keywords: Domain shift; Dataset shift; Causal models; Distributional robustness; Anti-causal prediction; Image classification
# 1. Introduction | 1710.11469#1 | Conditional Variance Penalties and Domain Shift Robustness |
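The grouping and penalty described in the chunk above (group observations that share the same class and identifier (Y, ID) = (y, id), then penalize the conditional variance of the prediction within each group) can be sketched in a few lines of plain Python. This is an illustrative reimplementation, not the authors' code; the squared-error base loss in `regularized_loss` and all names are assumptions made for the example.

```python
from statistics import pvariance, mean

def core_penalty(predictions, y, ids):
    """CoRe-style conditional variance penalty (sketch): average the
    within-group variance of the prediction, where a group collects all
    observations sharing the same (class label, identifier) pair."""
    groups = {}
    for pred, label, obj in zip(predictions, y, ids):
        groups.setdefault((label, obj), []).append(pred)
    # Only groups with at least two observations yield a variance estimate.
    variances = [pvariance(g) for g in groups.values() if len(g) > 1]
    return mean(variances) if variances else 0.0

def regularized_loss(predictions, targets, y, ids, lam=1.0):
    """Squared-error loss plus lam times the conditional variance penalty
    (the base loss is chosen for illustration only)."""
    base = mean((p - t) ** 2 for p, t in zip(predictions, targets))
    return base + lam * core_penalty(predictions, y, ids)
```

Predictions that agree for all images of the same person incur no penalty, while within-person variation is penalized, which steers the model away from style features.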
1710.11573 | 1 | # ABSTRACT
As neural networks grow deeper and wider, learning networks with hard-threshold activations is becoming increasingly important, both for network quantization, which can drastically reduce time and energy requirements, and for creating large in- tegrated systems of deep networks, which may have non-differentiable components and must avoid vanishing and exploding gradients for effective learning. However, since gradient descent is not applicable to hard-threshold functions, it is not clear how to learn networks of them in a principled way. We address this problem by observing that setting targets for hard-threshold hidden units in order to minimize loss is a discrete optimization problem, and can be solved as such. The discrete opti- mization goal is to ï¬nd a set of targets such that each unit, including the output, has a linearly separable problem to solve. Given these targets, the network decomposes into individual perceptrons, which can then be learned with standard convex ap- proaches. Based on this, we develop a recursive mini-batch algorithm for learning deep hard-threshold networks that includes the popular but poorly justiï¬ed straight- through estimator as a special case. Empirically, we show that our algorithm improves classiï¬cation accuracy in a number of settings, including for AlexNet and ResNet-18 on ImageNet, when compared to the straight-through estimator.
# INTRODUCTION | 1710.11573#1 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
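As a concrete picture of the abstract's claim that, given suitable targets, "the network decomposes into individual perceptrons, which can then be learned with standard convex approaches", here is the classic perceptron rule for a single hard-threshold unit. This is a minimal sketch on a made-up linearly separable problem (AND), not the paper's algorithm.

```python
def heaviside(z):
    return 1 if z >= 0 else 0  # hard-threshold activation

def train_perceptron(samples, epochs=20, lr=1.0):
    """Classic perceptron updates for one unit with two inputs: when the
    targets are linearly separable, the unit achieves them exactly."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), t in samples:
            y = heaviside(w[0] * x1 + w[1] * x2 + b)
            err = t - y
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND is linearly separable, so the perceptron reaches zero training error.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
```

In the paper's framework, each hidden and output unit faces exactly this kind of sub-problem once its inputs and targets are fixed.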
1710.11469 | 2 | # 1. Introduction
Deep neural networks (DNNs) have achieved outstanding performance on prediction tasks like visual object and speech recognition (Krizhevsky et al., 2012; Szegedy et al., 2015; He et al., 2015). Issues can arise when the learned representations rely on dependencies that vanish in test distributions (see for example Quionero-Candela et al. (2009); Torralba and Efros (2011); Csurka (2017) and references therein). Such domain shifts can be caused by changing conditions such as color, background or location changes. Predictive performance is then likely to degrade. For example, consider the analysis presented in Kuehlkamp et al. (2017) which is concerned with the problem of predicting a person's gender based on images of their iris. The results indicate that this problem is more difficult than previous studies
| 1710.11469#2 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11573 | 2 | # INTRODUCTION
The original approach to neural classification was to learn single-layer models with hard-threshold activations, like the perceptron (Rosenblatt, 1958). However, it proved difficult to extend these methods to multiple layers, because hard-threshold units, having zero derivative almost everywhere and being discontinuous at the origin, cannot be trained by gradient descent. Instead, the community turned to multilayer networks with soft activation functions, such as the sigmoid and, more recently, the ReLU, for which gradients can be computed efficiently by backpropagation (Rumelhart et al., 1986). | 1710.11573#2 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem |
1710.11469 | 3 |
have suggested due to the remaining effect of cosmetics after segmenting the iris from the whole image.1 Previous analyses obtained good predictive performance on certain datasets but when testing on a dataset only including images without cosmetics accuracy dropped. In other words, the high predictive performance previously reported relied to a significant extent on exploiting the confounding effect of mascara on the iris segmentation which is highly predictive for gender. Rather than the desired ability of discriminating based on the iris' texture the systems would mostly learn to detect the presence of cosmetics.
More generally, existing biases in datasets used for training machine learning algorithms tend to be replicated in the estimated models (Bolukbasi et al., 2016). For an example involving Google's photo app, see Crawford (2016) and Emspak (2016). In §5 we show many examples where unwanted biases in the training data are picked up by the trained model. As any bias in the training data is in general used to discriminate between classes, these biases will persist in future classifications, raising also considerations of fairness and discrimination (Barocas and Selbst, 2016). | 1710.11469#3 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11573 | 3 | This approach has enjoyed remarkable success, enabling researchers to train networks with hundreds of layers and learn models that have significantly higher accuracy on a variety of tasks than any previous approach. However, as networks become deeper and wider, there has been a growing trend towards using hard-threshold activations for quantization purposes, where they enable binary or low-precision inference (e.g., Hubara et al. (2016); Rastegari et al. (2016); Zhou et al. (2016); Lin & Talathi (2016); Zhu et al. (2017)) and training (e.g., Lin et al. (2016); Li et al. (2017); Tang et al. (2017); Micikevicius et al. (2017)), which can greatly reduce the energy and computation time required by modern deep networks. Beyond quantization, the scale of the output of hard-threshold units is independent of (or insensitive to) the scale of their input, which can alleviate vanishing and exploding gradient issues and should help avoid some of the pathologies that occur during low-precision training with backpropagation (Li et al., 2017). Avoiding these issues is crucial for developing large systems of deep networks that can be used to perform even more complex tasks. | 1710.11573#3 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem |
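The scale-insensitivity property mentioned in the chunk above, namely that the output of a hard-threshold unit does not depend on the magnitude of its input, is easy to verify directly. This is a toy illustration, not code from the paper.

```python
def hard_threshold(values):
    """Sign activation: each output is +1 or -1, regardless of how large
    or small the pre-activation values are."""
    return [1.0 if v >= 0.0 else -1.0 for v in values]

acts = [-0.3, 0.0, 2.5]
scaled = [10.0 * v for v in acts]
# Rescaling the input leaves the output unchanged (scale-insensitivity).
assert hard_threshold(acts) == hard_threshold(scaled) == [-1.0, 1.0, 1.0]
```

This is the property that decouples the output scale of a quantized layer from the scale of its inputs, which the chunk credits with alleviating vanishing and exploding gradients.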
1710.11469 | 4 | Addressing the issues outlined above, we propose Conditional variance Regularization (CoRe) to give differential weight to different latent features. Conceptually, we take a causal view of the data generating process and categorize the latent data generating factors into "conditionally invariant" (core) and "orthogonal" (style) features, as in Gong et al. (2016). The core and style features are unobserved and can in general be highly nonlinear transformations of the observed input data. It is desirable that a classifier uses only the core features as they pertain to the target of interest in a stable and coherent fashion. Basing a prediction on the core features alone yields stable predictive accuracy even if the style features are altered. CoRe yields an estimator which is approximately invariant under changes in the conditional distribution of the style features (conditional on the class labels) and it is asymptotically robust with respect to domain shifts, arising through interventions on the style features. CoRe relies on the fact that for certain datasets we can observe grouped observations in the sense that we observe the same object under different conditions. Rather than | 1710.11469#4 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11573 | 4 | For these reasons, we are interested in developing well-motivated and efï¬cient techniques for learning deep neural networks with hard-threshold units. In this work, we propose a framework for learning deep hard-threshold networks that stems from the observation that hard-threshold units output discrete values, indicating that combinatorial optimization may provide a principled method for training these networks. By specifying a set of discrete targets for each hidden-layer activation, the network
1
decomposes into many individual perceptrons, each of which can be trained easily given its inputs and targets. The difï¬culty in learning a deep hard-threshold network is thus in setting the targets so that each trained perceptron â including the output units â has a linearly separable problem to solve and thus can achieve its targets. We show that networks in which this is possible can be learned using our mixed convex-combinatorial optimization framework. | 1710.11573#4 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
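The scheme described in chunk 4 above, where fixed discrete targets turn each layer into independent perceptron problems, can be illustrated with a toy sketch. This is not the authors' FTPROP code; the data, the two-unit hidden layer, and all helper names are hypothetical.

```python
import numpy as np

def train_perceptron(X, t, epochs=50, lr=0.1):
    """Classic perceptron updates for one unit: inputs X (m x n), targets t in {-1, +1}."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        for x_i, t_i in zip(X, t):
            if np.sign(x_i @ w) != t_i:  # sign(0) counts as a mistake
                w += lr * t_i * x_i
    return w

rng = np.random.default_rng(0)
X = rng.standard_normal((20, 3))                # toy inputs
T1 = np.sign(rng.standard_normal((20, 2)))      # hypothetical hidden-layer targets
Y = np.sign(T1 @ np.array([1.0, -1.0]) + 0.1)   # toy output targets

# Given the targets, each hidden unit and the output unit trains separately.
W1 = np.stack([train_perceptron(X, T1[:, j]) for j in range(T1.shape[1])])
H1 = np.sign(X @ W1.T)          # hidden activations from the learned weights
w2 = train_perceptron(H1, Y)    # the output unit is a perceptron on H1
```

Whether every perceptron can actually reach its targets depends on the targets being feasible, which is exactly the combinatorial part of the problem the chunk describes.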
1710.11469 | 5 | the fact that for certain datasets we can observe grouped observations in the sense that we observe the same object under different conditions. Rather than pooling over all examples, CoRe exploits knowledge about this grouping, i.e., that a number of instances relate to the same object. By penalizing between-object variation of the prediction less than variation of the prediction for the same object, we can steer the prediction to be based more on the latent core features and less on the latent style features. While the proposed methodology can be motivated from the desire to achieve representational invariance with respect to the style features, the causal framework we use throughout this work allows us to precisely formulate the distribution shifts we aim to protect against. | 1710.11469#5 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
] |
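Chunk 5 above describes penalizing within-group variation of the prediction for observations that share (Y, ID). A minimal sketch of such a conditional-variance penalty, under assumed details (grouping by exact (y, id) pairs, plain variance, made-up numbers), not the authors' TensorFlow implementation:

```python
import numpy as np
from collections import defaultdict

def core_penalty(preds, y, ids):
    """Average within-group variance of predictions over groups (y, id)
    that contain more than one observation."""
    groups = defaultdict(list)
    for p, yi, gi in zip(preds, y, ids):
        groups[(yi, gi)].append(p)
    vars_ = [np.var(v) for v in groups.values() if len(v) > 1]
    return float(np.mean(vars_)) if vars_ else 0.0

preds = np.array([0.9, 0.7, 0.2, 0.1, 0.8])
y     = np.array([1,   1,   0,   0,   1])
ids   = np.array([7,   7,   3,   3,   9])   # two grouped pairs, one singleton
penalty = core_penalty(preds, y, ids)
# the training objective would then be: prediction loss + lambda * penalty
```

Singleton groups contribute nothing, which matches the paper's point that only a small fraction of images needs ID information.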
1710.11573 | 5 | Building on this framework, we then develop a recursive algorithm, feasible target propagation (FTPROP), for learning deep hard-threshold networks. Since this is a discrete optimization problem, we develop heuristics for setting the targets based on per-layer loss functions. The mini-batch version of FTPROP can be used to explain and justify the oft-used straight-through estimator (Hinton, 2012; Bengio et al., 2013), which can now be seen as an instance of FTPROP with a specific choice of per-layer loss function and target heuristic. Finally, we develop a novel per-layer loss function that improves learning of deep hard-threshold networks. Empirically, we show improvements for our algorithm over the straight-through estimator on CIFAR-10 for two convolutional networks and on ImageNet for AlexNet and ResNet-18, with multiple types of hard-threshold activation.
# RELATED WORK | 1710.11573#5 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [] |
1710.11469 | 6 | The remainder of this manuscript is structured as follows: §1.1 starts with a few motivating examples, showing simple settings where the style features change in the test distribution such that standard empirical risk minimization approaches would fail. In §1.2 we review related work, introduce notation in §2 and in §3 we formally introduce conditional variance regularization CoRe. In §4, CoRe is shown to be asymptotically equivalent to minimizing the risk under a suitable class of strong interventions in a partially linear classification setting, provided one chooses sufficiently strong CoRe penalties. We also show that
1. Segmenting eyelashes from the iris is not entirely accurate which implies that the iris images can still contain parts of eyelashes, occluding the iris. As mascara causes the eyelashes to be thicker and darker, it is difficult to entirely remove the presence of cosmetics from the iris images.
the population CoRe penalty induces domain shift robustness for general loss functions to first order in the intervention strength. The size of the conditional variance penalty can be shown to determine the size of the distribution class over which we can expect distributional robustness. In §5 we evaluate the performance of CoRe in a variety of experiments. | 1710.11469#6 | Conditional Variance Penalties and Domain Shift Robustness | | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [] |
1710.11573 | 6 | # RELATED WORK
The most common method for learning deep hard-threshold networks is to use backpropagation with the straight-through estimator (STE) (Hinton, 2012; Bengio et al., 2013), which simply replaces the derivative of each hard-threshold unit with the identity function. The STE is used in the quantized network literature (see citations above) to propagate gradients through quantized activations, and is used in Shalev-Shwartz et al. (2017) for training with flat activations. Later work generalized the STE to replace the hard-threshold derivative with other functions, including saturated versions of the identity function (Hubara et al., 2016). However, while the STE tends to work quite well in practice, we know of no rigorous justification or analysis of why it works or how to choose replacement derivatives. Beyond being unsatisfying in this regard, the STE is not well understood and can lead to gradient mismatch errors, which compound as the number of layers increases (Lin & Talathi, 2016). We show here that the STE, saturated STE, and all types of STE that we have seen are special cases of our framework, thus providing a principled justification for it and a basis for exploring and understanding alternatives. | 1710.11573#6 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [] |
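Chunk 6 above describes the STE: the forward pass applies the hard threshold, while the backward pass replaces its zero derivative with the identity (plain STE) or with the derivative of a saturated identity (saturated STE). A small numpy sketch of that substitution; the function names are illustrative, not from the paper:

```python
import numpy as np

def sign_forward(z):
    """Forward pass: hard threshold to {-1, +1}."""
    return np.where(z >= 0, 1.0, -1.0)

def ste_backward(grad_out, z, saturated=True):
    """Backward pass under the STE.
    Plain STE: derivative := 1 everywhere (gradients pass straight through).
    Saturated STE: derivative := 1 only where |z| <= 1 (hard-tanh derivative)."""
    if saturated:
        return grad_out * (np.abs(z) <= 1.0)
    return grad_out

z = np.array([-2.0, -0.5, 0.3, 1.7])
h = sign_forward(z)                   # -> [-1., -1., 1., 1.]
g = ste_backward(np.ones_like(z), z)  # -> [0., 1., 1., 0.]
```

The gradient mismatch mentioned in the chunk comes precisely from this substitution: the backward function is not the derivative of the forward one.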
1710.11469 | 7 | (i) Causal framework and distributional robustness. We provide a causal framework to define distributional shifts for style variables. Our framework allows the domain variable itself to be latent.
(ii) Conditional variance penalties. We introduce conditional variance penalties and show two robustness properties in Theorems 1 and 2.
(iii) Software. We illustrate our ideas using synthetic and real-data experiments. A TensorFlow implementation of CoRe as well as code to reproduce some of the experimental results are available at https://github.com/christinaheinze/core.
# 1.1 Motivating examples | 1710.11469#7 | Conditional Variance Penalties and Domain Shift Robustness | | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [] |
1710.11573 | 7 | Another common approach to training with hard-threshold units is to use randomness, either via stochastic neurons (e.g., Bengio et al. (2013); Hubara et al. (2016)) or probabilistic training methods, such as those of Soudry et al. (2014) or Williams (1992), both of which are methods for softening hard-threshold units. In contrast, our goal is to learn networks with deterministic hard-threshold units.
Finally, target propagation (TP) (LeCun, 1986; 1987; Carreira-Perpiñán & Wang, 2014; Bengio, 2014; Lee et al., 2015; Taylor et al., 2016) is a method that explicitly associates a target with the output of each activation in the network, and then updates each layer's weights to make its activations more similar to the targets. Our framework can be viewed as an instance of TP that uses combinatorial optimization to set discrete targets, whereas previous approaches employed continuous optimization to set continuous targets. The MADALINE Rule II algorithm (Winter & Widrow, 1988) can also be seen as a special case of our framework and of TP, where only one target is set at a time.
# 2 LEARNING DEEP NETWORKS WITH HARD-THRESHOLD UNITS | 1710.11573#7 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [] |
1710.11469 | 8 | # 1.1 Motivating examples
To motivate the methodology we propose, consider the examples shown in Figures 1 and 2. Example 1 shows a setting where a linear decision boundary is suitable. Panel (a) in Figure 1 shows a subsample of the training data where class 1 is associated with red points, dark blue points correspond to class 0. If we were asked to draw a decision boundary based on the training data, we would probably choose one that is approximately horizontal. The style feature here corresponds to the linear direction (1, -0.75)^T. Panel (b) shows a subsample of the test set where the style feature is intervened upon for class 1 observations: class 1 is associated with orange squares, cyan squares correspond to class 0. Clearly, a horizontal decision boundary would have misclassified all test points of class 1. | 1710.11469#8 | Conditional Variance Penalties and Domain Shift Robustness | | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [] |
1710.11469 | 9 | Example 2 shows a setting where a nonlinear decision boundary is required. Here, the core feature corresponds to the distance from the origin while the style feature corresponds to the angle between the x1-axis and the vector from the origin to (x1, x2). Panel (c) shows a subsample of the training data and panel (d) additionally shows a subsample of the test data where the style, i.e. the distribution of the angle, is intervened upon. Clearly, a circular decision boundary yields optimal performance on both training and test set but is unlikely to be found by a standard classification algorithm when only using the training set for the estimation. We will return to these examples in §3.4. | 1710.11469#9 | Conditional Variance Penalties and Domain Shift Robustness | | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [] |
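The point of Example 2 can be checked with a small simulation. The radii, threshold, and angle ranges below are made up for illustration (the chunk does not specify them): a classifier that uses only the radius (the core feature) is unaffected when the angle distribution (the style feature) is intervened on.

```python
import numpy as np

rng = np.random.default_rng(0)

def sample(n, y, angle_range):
    """Toy generator: class 0 has small radius, class 1 large radius;
    angles are drawn from angle_range (the style feature)."""
    r = rng.uniform(0.0, 1.0, n) if y == 0 else rng.uniform(1.5, 2.5, n)
    theta = rng.uniform(*angle_range, n)
    return np.column_stack([r * np.cos(theta), r * np.sin(theta)])

radius = lambda X: np.hypot(X[:, 0], X[:, 1])
predict = lambda X: (radius(X) > 1.25).astype(int)   # circular decision boundary

# Training angles in one quadrant; test angles intervened to the opposite side.
Xtr0, Xtr1 = sample(100, 0, (0, np.pi / 2)), sample(100, 1, (0, np.pi / 2))
Xte0, Xte1 = sample(100, 0, (np.pi, 3 * np.pi / 2)), sample(100, 1, (np.pi, 3 * np.pi / 2))

acc = lambda X0, X1: 0.5 * ((predict(X0) == 0).mean() + (predict(X1) == 1).mean())
acc_train = acc(Xtr0, Xtr1)   # 1.0
acc_test = acc(Xte0, Xte1)    # 1.0: the radius-based boundary is angle-invariant
```

A classifier fit on the raw (x1, x2) coordinates of the training quadrant alone would have no reason to recover this circular boundary, which is the failure mode the example illustrates.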
1710.11573 | 9 | with weight matrices W = {W_d : W_d ∈ R^{n_d × n_{d-1}}}_{d=1}^{ℓ} and element-wise activation function g(x) = sign(x), where sign is the sign function such that sign(x) = 1 if x > 0 and -1 otherwise. Each layer d has n_d units, where we define n_0 = n for the input layer, and we let h_d = g(W_d ... g(W_1 x) ...) denote the output of each hidden layer, where h_d = (h_{d1}, ..., h_{d n_d}) and h_{dj} ∈ {-1, +1} for each layer d and each unit j. Similarly, we let z_d = W_d g(... g(W_1 x) ...) denote the pre-activation output of layer d. For compactness, we have incorporated the bias term into the weight matrices. We denote a row or column of a matrix W_d as W_{d,j·} and W_{d,·j}, respectively, and the entry in the jth row and kth column as W_{d,jk}. Using matrix notation, we can write this model as Y = f(X; W) = g(W_ℓ ... g(W_1 X) ...), where X is the n × m matrix of dataset instances and Y is the n_ℓ × m matrix of outputs. We let T_d denote the matrix of | 1710.11573#9 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [] |
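The model defined in chunk 9 can be written as a short forward pass. A sketch, with toy shapes; the convention of folding the bias into each weight matrix by appending a constant 1 to the layer input is assumed here:

```python
import numpy as np

def sign(x):
    """Hard-threshold activation g(x) = sign(x), mapping to {-1, +1}."""
    return np.where(x >= 0, 1.0, -1.0)

def forward(X, weights):
    """X: n x m matrix of instances (columns are examples);
    weights: list of W_d with shapes n_d x (n_{d-1} + 1), bias folded in."""
    H = X
    for W in weights:
        H_aug = np.vstack([H, np.ones((1, H.shape[1]))])  # append bias input
        H = sign(W @ H_aug)    # z_d = W_d h_{d-1}; h_d = g(z_d)
    return H                   # n_l x m matrix of +/-1 outputs

rng = np.random.default_rng(0)
X = rng.standard_normal((3, 5))           # n = 3 inputs, m = 5 examples
weights = [rng.standard_normal((4, 4)),   # layer 1: n_1 = 4 units
           rng.standard_normal((1, 5))]   # output layer: n_2 = 1 unit
Y_hat = forward(X, weights)
```

Every hidden activation is a discrete value in {-1, +1}, which is what makes setting hidden targets a combinatorial rather than continuous problem.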
1710.11469 | 10 | Lastly, we introduce a strong dependence between the class label and the style feature "image quality" in the third example by manipulating the face images from the CelebA dataset (Liu et al., 2015): in the training set images of class "wearing glasses" are associated with a lower image quality than images of class "not wearing glasses". Examples are shown in Figure 2(a). In the test set, this relation is reversed, i.e. images showing persons wearing glasses are of higher quality than images of persons without glasses, with examples in Figure 2(b). We will return to this example in §5.3 and show that training a convolutional neural network to distinguish between people wearing glasses or not works well on test data that are drawn from the same distribution (with error rates below 2%) but fails entirely on the shown test data, with error rates worse than 65%.
(a) Example 1, training set. (b) Example 1, test set. (c) Example 2, training set. (d) Example 2, test set. | 1710.11469#10 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
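The abstract above describes the CoRe penalty: group observations that share the same class and identifier $(Y,\mathrm{ID})$ and penalize the conditional variance of the prediction within each group. A minimal pure-Python sketch of that grouping-and-variance computation (the data and grouping keys are invented for illustration; this is not the authors' implementation):

```python
from collections import defaultdict
from statistics import pvariance

def core_penalty(predictions, labels, ids):
    """Conditional-variance (CoRe) penalty: the average, over groups that
    share the same (class, identifier) pair, of the within-group
    (population) variance of the model's predictions."""
    groups = defaultdict(list)
    for pred, y, ident in zip(predictions, labels, ids):
        groups[(y, ident)].append(pred)
    # Only groups with >= 2 members yield a variance estimate.
    variances = [pvariance(g) for g in groups.values() if len(g) >= 2]
    return sum(variances) / len(variances) if variances else 0.0

# Two images of the same person (y=1, id=7) that differ only in style
# should receive similar predictions; a large spread is penalized.
print(core_penalty([0.9, 0.3, 0.8, 0.82], [1, 1, 1, 1], [7, 7, 8, 8]))
```

In training, a multiple of this penalty would be added to the usual classification loss, discouraging the network from using style features that vary within an $(Y,\mathrm{ID})$ group.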
1710.11469 | 11 | (a) Example 1, training set. (b) Example 1, test set. (c) Example 2, training set. (d) Example 2, test set.
Figure 1: Motivating examples 1 and 2: a linear example in (a) and (b) and a nonlinear example in (c) and (d). The distributions are shifted in test data by style interventions where style in example (a/b) is the linear direction (1, −0.75) and the polar angle in example (c/d). Standard estimators achieve error rates of 0% on the training data and test data drawn from the same distribution as the training data (panels (a) and (c), respectively). On the shown test set where the distribution of the style conditional on Y has changed the error rates are > 50% (panels (b) and (d), respectively).
(a) Example 3, training set. (b) Example 3, test set. | 1710.11469#11 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11469 | 12 | Figure 2: Motivating example 3: The goal is to predict whether a person is wearing glasses. The distributions are shifted in test data by style interventions where style is the image quality. A 5-layer CNN achieves 0% training error and 2% test error for images that are sampled from the same distribution as the training images (a), but a 65% error rate on images where the confounding between image quality and glasses is changed (b). See §5.3 for more details.
# 1.2 Related work
For general distributional robustness, the aim is to learn
$\text{argmin}_\theta \; \sup_{F \in \mathcal{F}} \; E_F\big(\ell(Y, f_\theta(X))\big) \qquad (1)$
for a given set $\mathcal{F}$ of distributions, twice differentiable and convex loss $\ell$, and prediction $f_\theta(x)$. The set $\mathcal{F}$ is the set of distributions on which one would like the estimator to achieve a guaranteed performance bound.
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
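Objective (1) above — minimize over θ the worst-case expected loss across a family $\mathcal{F}$ of distributions — can be made concrete for a finite family: evaluate the risk under each candidate distribution, take the maximum, and pick the θ minimizing it. A toy sketch (the discrete distributions, squared loss, and grid search are all invented for illustration):

```python
# Each distribution F is a list of (probability, x, y) triples.
F1 = [(0.5, 1.0, 1.0), (0.5, 2.0, 2.0)]   # roughly y = 1.0 * x
F2 = [(0.5, 1.0, 0.5), (0.5, 2.0, 1.0)]   # roughly y = 0.5 * x
family = [F1, F2]

def risk(theta, F):
    """E_F[(Y - theta * X)^2] for a discrete distribution F."""
    return sum(p * (y - theta * x) ** 2 for p, x, y in F)

def worst_case_risk(theta, family):
    """sup over F in the family of the expected loss at theta."""
    return max(risk(theta, F) for F in family)

# argmin_theta sup_F E_F[loss], via a coarse grid search over theta.
thetas = [i / 100 for i in range(201)]
best = min(thetas, key=lambda t: worst_case_risk(t, family))
print(best)  # -> 0.75, midway between the two per-distribution optima
```

The minimax solution balances the two candidate distributions (here both risks equal $2.5 \cdot 0.0625$ at $\theta = 0.75$), rather than fitting either one exactly.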
1710.11573 | 12 | Figure 1: After setting the hidden-layer targets T1 of a deep hard-threshold network, the network decomposes into independent perceptrons, which can then be learned with standard methods.
at layer d. Our goal will be to learn f by finding the weights W that minimize an aggregate loss $L(Y, T_\ell) = \sum_{i=1}^{m} L(y^{(i)}, t^{(i)})$ for some convex per-instance loss L(y, t). In the simplest case, a hard-threshold network with no hidden layers is a perceptron $Y = g(W_1 X)$, as introduced by Rosenblatt (1958). The goal of learning a perceptron, or any hard-threshold network, is to classify unseen data. A useful first step is to be able to correctly classify the training data, which we focus on here for simplicity when developing our framework; however, standard generalization techniques such as regularization are easily incorporated into this framework and we do this for the experiments. Since a perceptron is a linear classifier, it is only able to separate a linearly-separable dataset. Definition 1. A dataset $\{(x^{(i)}, t^{(i)})\}_{i=1}^{m}$ is linearly separable iff there exists a vector $w \in \mathbb{R}^n$ and a real number $\gamma > 0$ such that $(w \cdot x^{(i)})\, t^{(i)} \geq \gamma$ for all $i = 1 \ldots m$. | 1710.11573#12 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
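The perceptron learning rule referenced above (Rosenblatt, 1958) and the separability condition of Definition 1 can be sketched in a few lines: the classic mistake-driven update finds a separating hyperplane whenever the data are linearly separable, and fails to converge on non-separable targets such as XOR (illustrative code, not the paper's implementation):

```python
def perceptron(data, epochs=100):
    """Classic perceptron rule: on a mistake, move w toward t * x.
    `data` is a list of (x, t) pairs with t in {-1, +1}."""
    n = len(data[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        mistakes = 0
        for x, t in data:
            if t * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x)]
                mistakes += 1
        if mistakes == 0:        # every (w . x) t > 0: data separated
            return w
    return None                  # no separator found within the budget

# Linearly separable: AND-like targets (first input is a bias of 1).
sep = [((1, 0, 0), -1), ((1, 0, 1), -1), ((1, 1, 0), -1), ((1, 1, 1), +1)]
print(perceptron(sep) is not None)   # -> True

# XOR targets are not linearly separable (Minsky & Papert, 1969).
xor = [((1, 0, 0), -1), ((1, 0, 1), +1), ((1, 1, 0), +1), ((1, 1, 1), -1)]
print(perceptron(xor) is None)       # -> True
```

By the perceptron convergence theorem, the number of updates on separable data is bounded in terms of the margin γ, which is why the fixed epoch budget suffices for the separable example.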
1710.11469 | 13 | Causal inference can be seen to be a specific instance of distributional robustness, where we take $\mathcal{F}$ to be the class of all distributions generated under do-interventions on X (Meinshausen, 2018; Rothenhäusler et al., 2018). Causal models thus have the defining advantage that the predictions will be valid even under arbitrarily large interventions on all predictor variables (Haavelmo, 1944; Aldrich, 1989; Pearl, 2009; Schölkopf et al., 2012; Peters et al., 2016; Zhang et al., 2013, 2015; Yu et al., 2017; Rojas-Carulla et al., 2018; Magliacane et al., 2018). There are two difficulties in transferring these results to the setting of domain shifts in image classification. The first hurdle is that the classification task is typically anti-causal since the image we use as a predictor is a descendant of the true class of the object we are interested in rather than the other way around. The second challenge is that we do not want to (nor could we) guard against arbitrary interventions on any or all variables but only would like to guard against a shift of the style features. It is hence not immediately obvious how standard causal inference can be used to guard against large domain shifts. | 1710.11469#13 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
When a dataset is linearly separable, the perceptron algorithm is guaranteed to find its separating hyperplane in a finite number of steps (Novikoff, 1962), where the number of steps required is dependent on the size of the margin γ. However, linear separability is a very strong condition, and even simple functions, such as XOR, are not linearly separable and thus cannot be learned by a perceptron (Minsky & Papert, 1969). We would thus like to be able to learn multilayer hard-threshold networks. | 1710.11573#13 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 14 | Another line of work uses a class of distributions of the form $\mathcal{F} = \mathcal{F}_\epsilon(F_0)$ with
$\mathcal{F}_\epsilon(F_0) := \{\text{distributions } F \text{ such that } D(F, F_0) \leq \epsilon\}, \qquad (2)$
with $\epsilon > 0$ a small constant and $D(F, F_0)$ being, for example, a $\phi$-divergence (Namkoong and Duchi, 2017). $F_0$ can be the true (but generally unknown) population distribution P from which the data were drawn or its empirical counterpart $P_n$. The distributionally robust targets in Eq. (2) can often be expressed in penalized form (Gao et al., 2017; Sinha et al., 2018; Xu et al.
| 1710.11469#14 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
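The worst case over a divergence ball as in Eq. (2) can be made concrete for a KL-ball around the empirical distribution: the worst-case reweighting of the sample losses is an exponential tilting, so one can scan the tilt parameter and keep feasible candidates. A rough sketch under these assumptions (the losses, tolerance, and grid are invented; this is a generic DRO illustration, not the paper's method):

```python
from math import exp, log

def tilted_weights(losses, eta):
    """Exponential tilting of the uniform weights by eta * loss."""
    w = [exp(eta * l) for l in losses]
    s = sum(w)
    return [wi / s for wi in w]

def kl_to_uniform(w):
    """KL divergence of weights w from the uniform distribution."""
    n = len(w)
    return sum(wi * log(wi * n) for wi in w if wi > 0)

def worst_case_empirical_risk(losses, eps):
    """sup over reweightings w with KL(w || uniform) <= eps of
    sum_i w_i * loss_i; the maximizer is an exponential tilting,
    so we scan the tilt eta >= 0 and keep feasible candidates."""
    best = sum(losses) / len(losses)          # eta = 0: plain average
    for i in range(1001):
        eta = i * 0.01
        w = tilted_weights(losses, eta)
        if kl_to_uniform(w) <= eps:
            best = max(best, sum(wi * l for wi, l in zip(w, losses)))
    return best

losses = [0.1, 0.2, 0.4, 1.5]
print(worst_case_empirical_risk(losses, eps=0.0))  # plain mean, about 0.55
print(worst_case_empirical_risk(losses, eps=0.5))  # upweights the 1.5 loss
```

As ε grows, mass shifts toward the largest losses, which is the mechanism behind the penalized forms cited in the chunk above.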
Consider a simple single-hidden-layer hard-threshold network $Y = f(X; W) = g(W_2\, g(W_1 X)) = g(W_2 H_1)$ for a dataset $D = (X, T_2)$, where $H_1 = g(W_1 X)$ are the hidden-layer activations. An example of such a network is shown on the left side of Figure 1. Clearly, Y and $H_1$ are both collections of (single-layer) perceptrons. Backpropagation cannot be used to train the input layer's weights $W_1$ because of the hard-threshold activations but, since each hidden activation $h_{1j}$ is the output of a perceptron, if we knew the value $t_{1j} \in \{-1, +1\}$ that each hidden unit should take for each input x, we could then use the perceptron algorithm to set the first-layer weights $W_1$ to produce these target values. We refer to $t_{1j}$ as the target of $h_{1j}$. Given a matrix of hidden-layer targets $T_1 \in \{-1, +1\}^{m \times n}$, each layer (and in fact each perceptron in each layer) can be learned separately, as they no longer depend on each other, where the goal of perceptron learning is to update the weights of each layer d so that its | 1710.11573#14 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
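Once hidden-layer targets are fixed, the network decomposes as described above: each unit of each layer becomes an independent perceptron problem. A sketch that learns XOR this way, with the hidden targets set by hand to OR/AND patterns (setting the targets is the hard combinatorial part of the paper and is simply assumed here; this is illustrative code, not the authors'):

```python
def sign(z):
    return 1 if z > 0 else -1

def train_perceptron(pairs, epochs=200):
    """Mistake-driven perceptron training on (input, target) pairs."""
    n = len(pairs[0][0])
    w = [0.0] * n
    for _ in range(epochs):
        done = True
        for x, t in pairs:
            if t * sum(wi * xi for wi, xi in zip(w, x)) <= 0:
                w = [wi + t * xi for wi, xi in zip(w, x)]
                done = False
        if done:
            return w
    raise ValueError("targets not linearly separable")

# XOR with inputs in {-1, +1} and a bias feature of 1.
X  = [(1, -1, -1), (1, -1, 1), (1, 1, -1), (1, 1, 1)]
T2 = [-1, 1, 1, -1]                       # dataset targets (XOR)
# Hand-set hidden targets: unit 1 computes OR, unit 2 computes AND.
T1 = [(-1, -1), (1, -1), (1, -1), (1, 1)]

# With targets fixed, each unit is an independent perceptron problem.
w11 = train_perceptron([(x, t[0]) for x, t in zip(X, T1)])       # OR unit
w12 = train_perceptron([(x, t[1]) for x, t in zip(X, T1)])       # AND unit
w2  = train_perceptron([((1,) + t, y) for t, y in zip(T1, T2)])  # output

def forward(x):
    h = (sign(sum(a * b for a, b in zip(w11, x))),
         sign(sum(a * b for a, b in zip(w12, x))))
    return sign(sum(a * b for a, b in zip(w2, (1,) + h)))

print([forward(x) for x in X])   # recovers XOR: [-1, 1, 1, -1]
```

Each of the three subproblems (OR, AND, and "h1 and not h2") is linearly separable, so the perceptron updates converge and the composed network computes XOR exactly — a function no single perceptron can represent.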
1710.11469 | 15 | 2009). A Wasserstein-ball is a suitable class of distributions for example in the context of adversarial examples (Sinha et al., 2018; Szegedy et al., 2014; Goodfellow et al., 2015).
In this work, we do not try to achieve robustness with respect to a set of distributions that are pre-defined by a Kullback-Leibler divergence or a Wasserstein metric as in Eq. (2). We try to achieve robustness against a set of distributions that are generated by interventions on latent style variables. We will formulate the class of distributions over which we try to achieve robustness as in Eq. (1) but with the class of distributions in Eq. (2) now replaced with
$\mathcal{F}_\xi = \{F : D_{\text{style}}(F, F_0) \leq \xi\}, \qquad (3)$
where $F_0$ is again the distribution the training data are drawn from. The difference to standard distributional robustness approaches listed below Eq. (2) is now that the metric $D_{\text{style}}$ measures the shift of the orthogonal style features. We do not know a priori which features are prone to distributional shifts and which features have a stable (conditional) distribution. The metric is hence not known a priori and needs to be inferred in a suitable sense from the data.
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 15 | each layer) can be learned separately, as they no longer depend on each other, where the goal of perceptron learning is to update the weights of each layer d so that its activations $H_d$ equal its targets $T_d$ given inputs $T_{d-1}$. Figure 1 shows an example of this decomposition. We denote the targets of an $\ell$-layer network as $T = \{T_1, \ldots, T_\ell\}$, where $T_k$ for $k = 1 \ldots \ell - 1$ are the hidden-layer targets and $T_\ell$ are the dataset targets. We often let $T_0 = X$ for notational convenience. | 1710.11573#15 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 16 | Similar to this work in terms of their goals are the work of Gong et al. (2016) and Domain-Adversarial Neural Networks (DANN) proposed in Ganin et al. (2016), an approach motivated by the work of Ben-David et al. (2007). The main idea of Ganin et al. (2016) is to learn a representation that contains no discriminative information about the origin of the input (source or target domain). This is achieved by an adversarial training procedure: the loss on domain classification is maximized while the loss of the target prediction task is minimized simultaneously. The data generating process assumed in Gong et al. (2016) is similar to our model, introduced in §2.1, where we detail the similarities and differences between the models (cf. Figure 3). Gong et al. (2016) identify the conditionally independent features by adjusting a transformation of the variables to minimize the squared MMD distance between distributions in different domains. The fundamental difference between these very promising methods and our approach is that we use a different data basis. The domain identifier is explicitly observable | 1710.11469#16 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
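The squared MMD that Gong et al. (2016) minimize between domains has a simple empirical form: within-sample kernel similarities minus twice the cross-sample similarity. A small sketch with an RBF kernel on scalar samples (all numbers illustrative; not the authors' code):

```python
from math import exp

def rbf(x, y, gamma=1.0):
    """Gaussian (RBF) kernel on scalars."""
    return exp(-gamma * (x - y) ** 2)

def mmd_squared(xs, ys, gamma=1.0):
    """Biased empirical estimate of the squared MMD between samples."""
    m, n = len(xs), len(ys)
    k_xx = sum(rbf(a, b, gamma) for a in xs for b in xs) / (m * m)
    k_yy = sum(rbf(a, b, gamma) for a in ys for b in ys) / (n * n)
    k_xy = sum(rbf(a, b, gamma) for a in xs for b in ys) / (m * n)
    return k_xx + k_yy - 2 * k_xy

same    = mmd_squared([0.0, 0.1, 0.2], [0.05, 0.15, 0.1])
shifted = mmd_squared([0.0, 0.1, 0.2], [3.0, 3.1, 3.2])
print(same < shifted)   # samples further apart -> larger MMD^2
```

Driving this quantity to zero across domains is how conditionally independent (domain-invariant) features are selected in that line of work.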
Auxiliary-variable-based approaches, such as ADMM (Taylor et al., 2016; Carreira-Perpiñán & Wang, 2014) and other target propagation methods (LeCun, 1986; Lee et al., 2015) use a similar process for decomposing the layers of a network; however, these focus on continuous variables and impose (soft) constraints to ensure that each activation equals its auxiliary variable. We take a different approach here, inspired by the combinatorial nature of the problem and the perceptron algorithm. | 1710.11573#16 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 17 | between these very promising methods and our approach is that we use a different data basis. The domain identifier is explicitly observable in Gong et al. (2016) and Ganin et al. (2016), while it is latent in our approach. In contrast, we exploit the presence of an identifier variable ID that relates to the identity of an object (for example identifying a person). In other words, we do not assume that we have data from different domains but just different realizations of the same object under different interventions. This also differentiates this work from latent domain adaptation papers from the computer vision literature (Hoffman et al., 2012; Gong et al., 2013). Further related work is discussed in §6. | 1710.11469#17 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
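To make the grouping construction described above concrete, here is a minimal sketch (illustrative code, not taken from the paper; the helper name `core_penalty`, the population-variance estimator, and the equal weighting of groups are all simplifying assumptions): it groups predictions by their observed (Y, ID) pair and averages the within-group variances, the quantity the CoRe penalty adds to the training loss.

```python
from collections import defaultdict

def core_penalty(preds, labels, ids):
    """Average within-group sample variance of the predictions, where a
    group collects all examples sharing the same (class, identifier) pair,
    i.e. (Y, ID) = (y, id)."""
    groups = defaultdict(list)
    for p, y, i in zip(preds, labels, ids):
        groups[(y, i)].append(p)
    variances = []
    for vals in groups.values():
        if len(vals) < 2:
            continue  # a singleton group carries no variance information
        mu = sum(vals) / len(vals)
        variances.append(sum((v - mu) ** 2 for v in vals) / len(vals))
    return sum(variances) / len(variances) if variances else 0.0

# Predictions 1.0 and 3.0 share (class 0, id "a") and have variance 1.0;
# the singleton group for class 1 contributes nothing.
penalty = core_penalty([1.0, 3.0, 5.0], [0, 0, 1], ["a", "a", "b"])
```

During training, this value would be scaled by a penalty weight and added to the empirical loss; only the fraction of examples carrying ID information contributes to it.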
1710.11573 | 17 | Since the final layer is a perceptron, the training instances can only be separated if the hidden-layer activations H_1 are linearly separable with respect to the dataset targets T_2. Thus, the hidden-layer targets T_1 must be set such that they are linearly separable with respect to the dataset targets T_2, since the hidden-layer targets T_1 are the intended values of their activations H_1. However, in order to ensure that the hidden-layer activations H_1 will equal their targets T_1 after training, the hidden-layer targets T_1 must be able to be produced (exactly) by the first layer, which is only possible if the hidden-layer targets T_1 are also linearly separable with respect to the inputs X. Thus, a sufficient condition for f(X; W) to separate the data is that the hidden-layer targets induce linear separability in all units in both layers of the network. We refer to this property as feasibility. Definition 2. A setting of the targets T = {T_1, ..., T_ℓ} of an ℓ-layer deep hard-threshold network f(X; W) is feasible for a dataset D = (X, T_ℓ) iff for each unit j = 1 ... n_d in each layer d = 1 ... ℓ the dataset formed by its inputs T_{d-1} and targets T_{dj} is linearly separable, where T_0 ≜ X.
| 1710.11573#17 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
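The feasibility condition of Definition 2 can be checked mechanically for a one-hidden-layer network. The sketch below is illustrative, not the paper's code (the helper names `perceptron_fits` and `is_feasible` and the epoch cap are assumptions): a capped perceptron run certifies linear separability when it converges, though failing to converge within the cap is only a heuristic sign of inseparability.

```python
def perceptron_fits(X, t, epochs=1000):
    """Perceptron algorithm; returns (w, b) once every example has positive
    margin, or None if no separator is found within the epoch cap."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        mistakes = 0
        for x, ti in zip(X, t):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            if ti * z <= 0:  # misclassified or on the boundary
                w = [wi + ti * xi for wi, xi in zip(w, x)]
                b += ti
                mistakes += 1
        if mistakes == 0:
            return w, b
    return None

def is_feasible(X, T1, T2):
    """Feasibility for one hidden layer: each hidden unit's targets (a column
    of T1) must be separable w.r.t. the inputs X, and the output targets T2
    must be separable w.r.t. the hidden targets T1."""
    for j in range(len(T1[0])):
        if perceptron_fits(X, [row[j] for row in T1]) is None:
            return False
    return perceptron_fits(T1, T2) is not None

# XOR targets become feasible with OR-like and AND-like hidden targets.
X = [(-1, -1), (-1, 1), (1, -1), (1, 1)]
T1 = [(-1, -1), (1, -1), (1, -1), (1, 1)]
feasible = is_feasible(X, T1, [-1, 1, 1, -1])
```

This illustrates why feasibility is weaker than linear separability: XOR is not linearly separable, yet a feasible target setting for it exists.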
1710.11469 | 18 | # 2. Setting
We introduce the assumed underlying causal graph and some notation before discussing notions of domain shift robustness.
2. The distinction between "conditionally independent" features and "conditionally transferable" (which is the former modulo location and scale transformations) is for our purposes not relevant as we do not make a linearity assumption in general.
[Figure 3 diagram: panels (a) and (b) showing the causal graphs over D, Y, ID, X core, X style(∆), the image X(∆) and the prediction Ŷ(X(∆)) via f_θ; rendering omitted]
| 1710.11469#18 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 18 |
Feasibility is a much weaker condition than linear separability, since the output decision boundary of a multilayer hard-threshold network with feasible targets is in general highly nonlinear. It follows from the definition of feasibility and convergence of the perceptron algorithm that if a feasible setting of a network's targets on a dataset exists, the network can separate the training data. Proposition 1. Let D = {(x^(i), t^(i))} be a dataset and let f(X; W) be an ℓ-layer hard-threshold network with feasible targets T = {T_1, ..., T_ℓ} in which each layer d of f was trained separately with inputs T_{d-1} and targets T_d, where T_0 ≜ X; then f will correctly classify each instance x^(i), such that f(x^(i); W) t^(i) > 0 for all i = 1 ... m. | 1710.11573#18 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
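The layer-by-layer training behind Proposition 1 can be sketched as follows (illustrative, assuming plain perceptron updates; `train_unit`, `train_layerwise` and `forward` are hypothetical helper names): each layer is trained separately on (previous layer's targets → its own targets), and with feasible targets the sign activations reproduce the targets exactly, so the stacked network classifies the training set correctly.

```python
def sign(z):
    return 1 if z > 0 else -1

def train_unit(X, t, epochs=1000):
    """Perceptron updates until every example has positive margin."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        clean = True
        for x, ti in zip(X, t):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            if ti * z <= 0:
                w = [wi + ti * xi for wi, xi in zip(w, x)]
                b += ti
                clean = False
        if clean:
            return w, b
    raise ValueError("targets for this unit are not separable within the cap")

def train_layerwise(X, layer_targets):
    """Train each layer separately; layer_targets[d] holds one target tuple
    per example (the output layer uses 1-tuples)."""
    net, inputs = [], X
    for T in layer_targets:
        net.append([train_unit(inputs, [row[j] for row in T])
                    for j in range(len(T[0]))])
        inputs = T  # the next layer trains on this layer's targets
    return net

def forward(net, x):
    """Forward pass through the trained hard-threshold network."""
    for layer in net:
        x = tuple(sign(sum(wi * xi for wi, xi in zip(w, x)) + b)
                  for (w, b) in layer)
    return x
```

With the feasible XOR targets from the definition, the two-layer network trained this way reproduces the XOR labels even though no single perceptron could.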
1710.11469 | 19 |
Figure 3: Observed quantities are shown as shaded nodes; nodes of latent quantities are transparent. Left: data generating process for the considered model as in Gong et al. (2016), where the effect of the domain on the orthogonal features X style is mediated via unobserved noise ∆. The style interventions and all its descendants are shown as nodes with dashed borders to highlight variables that are affected by style interventions. Right: our setting. The domain itself is unobserved but we can now observe the (typically discrete) ID variable we use for grouping. The arrow between ID and Y can be reversed, depending on the sampling scheme.
# 2.1 Causal graph | 1710.11469#19 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 19 | Learning a deep hard-threshold network thus reduces to finding a feasible setting of its targets and then optimizing its weights given these targets, i.e., mixed convex-combinatorial optimization. The simplest method for this is to perform exhaustive search on the targets. Exhaustive search iterates through all possible settings of the hidden-layer targets, updating the weights of each perceptron whose inputs or targets changed, and returns the weights and feasible targets that result in the lowest loss. While impractical, exhaustive search is worth briefly examining to better understand the solution space. In particular, because of the decomposition afforded by setting the targets, exhaustive search over just the targets is sufficient to learn the globally optimal deep hard-threshold network, even though the weights are learned by gradient descent. Proposition 2. If a feasible setting of a deep hard-threshold network's targets on a dataset D exists, then exhaustive search returns the global minimum of the loss in time exponential in the number of hidden units. | 1710.11573#19 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
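The exhaustive search described above can be sketched for a tiny one-hidden-layer network (illustrative only: `fit` and `exhaustive_search` are assumed names, and for brevity this version returns the first feasible target setting it finds rather than the loss-minimizing one):

```python
import itertools

def fit(X, t, epochs=200):
    """Perceptron; returns (w, b) on convergence, None otherwise."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        errs = 0
        for x, ti in zip(X, t):
            if ti * (sum(wi * xi for wi, xi in zip(w, x)) + b) <= 0:
                w = [wi + ti * xi for wi, xi in zip(w, x)]
                b += ti
                errs += 1
        if errs == 0:
            return w, b
    return None

def exhaustive_search(X, t, n_hidden):
    """Enumerate all 2^(m * n_hidden) hidden-target settings and return the
    first feasible one together with the trained units."""
    m = len(X)
    for flat in itertools.product((-1, 1), repeat=m * n_hidden):
        T1 = [flat[i * n_hidden:(i + 1) * n_hidden] for i in range(m)]
        hidden = [fit(X, [row[j] for row in T1]) for j in range(n_hidden)]
        if any(u is None for u in hidden):
            continue  # some hidden unit's targets are not separable
        out = fit(T1, t)
        if out is not None:
            return T1, hidden, out
    return None
```

Even for four examples and two hidden units there are 2^8 = 256 candidate settings, which is why the exponential cost makes this useful only as a conceptual baseline.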
1710.11469 | 20 |
# 2.1 Causal graph
Let Y ∈ 𝒴 be a target of interest. Typically 𝒴 = R for regression or 𝒴 = {1, . . . , K} in classification with K classes. Let X ∈ R^p be predictor variables, for example the p pixels of an image. The causal structural model for all variables is shown in the panel (b) of Figure 3. The domain variable D is latent, in contrast to Gong et al. (2016) whose model is shown in panel (a) of Figure 3. We add the ID variable whose distribution can change conditional on Y. In Figure 3, Y → ID but in some settings it might be more plausible to consider ID → Y. For the proposed method both options are possible. Together with Y, the ID variable is used to group observations. It is typically discrete and relates to the identity of the underlying object (identity of a person, for example). The variable can be assumed to be latent in the setting of Gong et al. (2016). | 1710.11469#20 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 20 | Learning can be improved and feasibility relaxed if, instead of the perceptron algorithm, a more robust method is used for perceptron learning. For example, a perceptron can be learned for a non-linearly-separable dataset by minimizing the hinge loss L(z, t) = max(0, 1 - tz), a convex loss on the perceptron's pre-activation output z and target t that maximizes the margin when combined with L2 regularization. In general, however, any method for learning linear classifiers can be used. We denote the loss used to train the weights of a layer d as L_d, where the loss of the final layer L_ℓ is the output loss. | 1710.11573#20 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
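The hinge-loss alternative to the perceptron described above can be sketched as plain subgradient descent (illustrative; the learning rate, regularization strength and the `hinge_sgd` name are assumptions, not values from the paper):

```python
def hinge_sgd(X, t, lr=0.1, lam=1e-3, epochs=200):
    """SGD on the hinge loss max(0, 1 - t*z) with L2 regularization,
    where z = w.x + b is the pre-activation output."""
    w, b = [0.0] * len(X[0]), 0.0
    for _ in range(epochs):
        for x, ti in zip(X, t):
            z = sum(wi * xi for wi, xi in zip(w, x)) + b
            g = -ti if ti * z < 1 else 0.0  # subgradient of hinge w.r.t. z
            w = [wi - lr * (g * xi + lam * wi) for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

# Unlike the plain perceptron, this objective remains well-behaved even
# when the unit's targets are not perfectly separable.
w, b = hinge_sgd([(-1, -1), (-1, 1), (1, -1), (1, 1)], [-1, -1, -1, 1])
```

The margin-maximizing behavior comes from combining the hinge loss with the L2 term, which is what makes the per-unit subproblems robust enough to relax strict feasibility.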
1710.11469 | 21 | The rest of the graph is in analogy to Gong et al. (2016). The prediction is anti-causal, that is the predictor variables X that we use for Ŷ are non-ancestral to Y. In other words, the class label is here seen to be causal for the image and not the other way around3. The causal effect from the class label Y on the image X is mediated via two types of latent variables: the so-called core or "conditionally invariant" features X core and the orthogonal or style features X style. The distinguishing factor between the two is that external interventions ∆ are possible on the style features but not on the core features. If the interventions ∆ have different distributions in different domains, then the conditional distributions X core|Y = y, ID = id are invariant for all (y, id) while X style|Y = y, ID = id can change. The style variable can include point of view, image quality, resolution, rotations, color changes, body posture, movement etc. and will in general be context-dependent4. The style intervention variable ∆ influences both the latent style X style, and hence also the image X. In potential outcome notation, we let X style(∆ = δ) be the style under the intervention ∆ = δ and X(Y, ID, ∆ = δ) the image for class Y, identity ID and style intervention ∆. The latter is sometimes abbreviated as X(∆ = δ) for notational simplicity. Finally, fθ(X(∆ = δ)) is the prediction under the style intervention ∆ = δ. For a formal justification of using a causal graph and potential outcome notation simultaneously see Richardson and Robins (2013). | 1710.11469#21 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 21 | At the other end of the search spectrum is hill climbing. In each iteration, hill climbing evaluates all neighboring states of the current state (i.e., target settings that differ from the current one by only one target) and chooses the one with the lowest loss. The search halts when none of the new states improve the loss. Each state is evaluated by optimizing the weights of each perceptron given the state's targets, and then computing the output loss. Hill climbing is more practical than exhaustive search, since it need not explore an exponential number of states, and it also provides the same local optima guarantee as gradient descent on soft-threshold networks. Proposition 3. Hill climbing on the targets of a deep hard-threshold network returns a local minimum of the loss, where each iteration takes time linear in the size of the set of proposed targets.
Exhaustive search and hill climbing comprise two ends of the discrete optimization spectrum. Beam search, which maintains a beam of the most promising solutions and explores each, is another powerful approach that contains both hill climbing and exhaustive search as special cases. In general, however, any discrete optimization algorithm can be used for setting targets. For example, methods from satisfiability solving, integer linear programming, or constraint satisfaction might work well, as the linear separability requirements of feasibility can be viewed as constraints on the search space. | 1710.11573#21 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
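The hill-climbing loop above can be sketched generically over a vector of ±1 targets (illustrative; this first-improvement variant accepts any loss-reducing single flip rather than evaluating all neighbors before moving, and `loss_of` stands in for a callback that would retrain the affected perceptrons and return the output loss):

```python
def hill_climb(loss_of, T0):
    """Greedy local search over +/-1 target settings: flip one entry at a
    time, keep a flip whenever it lowers the loss, stop at a local minimum."""
    T, best = list(T0), loss_of(T0)
    improved = True
    while improved:
        improved = False
        for i in range(len(T)):
            cand = list(T)
            cand[i] = -cand[i]  # neighbor differing in exactly one target
            l = loss_of(cand)
            if l < best:
                T, best, improved = cand, l, True
    return T, best

# Toy loss counting -1 entries: the search flips everything to +1.
setting, loss = hill_climb(lambda T: sum(1 for v in T if v == -1), [-1, -1, -1])
```

Because only one target changes per move, an efficient implementation would retrain just the perceptrons whose inputs or targets that flip touches, which is what makes each iteration linear in the number of proposed targets.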
1710.11469 | 22 | in general be context-dependent4. The style intervention variable ∆ influences both the latent style X style, and hence also the image X. In potential outcome notation, we let X style(∆ = δ) be the style under the intervention ∆ = δ and X(Y, ID, ∆ = δ) the image for class Y, identity ID and style intervention ∆. The latter is sometimes abbreviated as X(∆ = δ) for notational simplicity. Finally, fθ(X(∆ = δ)) is the prediction under the style intervention ∆ = δ. For a formal justification of using a causal graph and potential outcome notation simultaneously see Richardson and Robins (2013). | 1710.11469#22 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 22 | We believe that our mixed convex-combinatorial optimization framework opens many new avenues for developing learning algorithms for deep networks, including those with non-differentiable modules. In the following section, we use these ideas to develop a learning algorithm that hews much closer to standard methods, and in fact contains the straight-through estimator as a special case.
# 3 FEASIBLE TARGET PROPAGATION
The open question from the preceding section is how to set the hidden-layer targets. Generating good, feasible targets for the entire network at once is a difficult problem; instead, an easier approach is to propose targets for only one layer at a time. As in backpropagation, it makes sense to start from the output layer, since the final-layer targets are given, and successively set targets for each upstream layer. Further, since it is hard to know a priori if a setting of a layer's targets is feasible for a given network architecture, a simple alternative is to set the targets for a layer d and then optimize the upstream weights (i.e., weights in layers j ≤ d) to check if the targets are feasible. Since the goals
| 1710.11573#22 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 23 | To be specific, if not mentioned otherwise we will assume a causal graph as follows. For independent εY, εID, εstyle in R, R, Rq respectively with positive density on their support and continuously differentiable functions ky, kid, and kstyle, kcore, kx,
Y ← ky(D, εY)
identifier ID ← kid(Y, εID)
core or conditionally invariant features X^core ← kcore(Y, ID)
style or orthogonal features X^style ← kstyle(Y, ID, εstyle) + Δ
image X ← kx(X^core, X^style).    (4)
If an existing image is classified by a human, then the image is certainly ancestral for the attached label. If the label Y refers, however, to the underlying true object (say if you generate images by asking people to take pictures of objects), then the more fitting model is the one where Y is ancestral for X. | 1710.11469#23 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 23 | when optimizing a layer's weights and when setting its upstream targets (i.e., its inputs) are the same, namely, to induce feasibility, a natural method for setting target values is to choose targets that reduce the layer's loss Ld. However, because the targets are discrete, moves in target space are large and non-smooth and cannot be guaranteed to lower the loss without actually performing the move. Thus, heuristics are necessary. We discuss these in more detail below.
Determining feasibility of the targets at layer d can be done by recursively updating the weights of layer d and proposing targets for layer d-1 given the targets for layer d. This recursion continues until the input layer is reached, where feasibility (i.e., linear separability) can be easily determined by optimizing that layer's weights given its targets and the dataset inputs. The targets at layer d can then be updated based on the information gained from the recursion and, if the upstream weights were altered, based on the new outputs of layer d-1. We call this recursive algorithm feasible target propagation, or FTPROP. Pseudocode is shown in Algorithm 1. | 1710.11573#23 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
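The per-layer feasibility check described in the FTPROP chunk above (optimize a layer's weights, then test whether its hard-threshold outputs match the targets) reduces to perceptron training. The sketch below is illustrative Python under assumed names, not the authors' code:

```python
# Illustrative sketch (assumed names, not the authors' code): the per-layer
# feasibility test asks whether a single hard-threshold unit can fit its
# targets exactly, i.e. whether the targets are linearly separable.
import numpy as np

def sign(z):
    return np.where(z >= 0, 1.0, -1.0)

def fit_perceptron(X, t, epochs=100, lr=0.1):
    """Train one hard-threshold unit on inputs X (n, d) and +/-1 targets t.
    Returns (weights, feasible): feasible means zero training mistakes."""
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x_i, t_i in zip(X, t):
            if sign(x_i @ w) != t_i:
                w += lr * t_i * x_i   # classic perceptron update
                mistakes += 1
        if mistakes == 0:
            return w, True            # targets are feasible for this unit
    return w, False                   # no separator found within the budget

# AND targets are linearly separable (first column is a bias feature) ...
X = np.array([[1, -1, -1], [1, -1, 1], [1, 1, -1], [1, 1, 1]], float)
_, feasible = fit_perceptron(X, np.array([-1, -1, -1, 1.0]))
print(feasible)       # True
# ... while XOR targets are not, so FTPROP would recurse to reset them.
_, feasible_xor = fit_perceptron(X, np.array([-1, 1, 1, -1.0]))
print(feasible_xor)   # False
```

The fixed epoch budget plays the role of the "computational budget" in Algorithm 1: exhausting it without a zero-error fit is treated as infeasibility.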
1710.11469 | 24 | 4. The type of features we regard as style and which ones we regard as core features can conceivably change depending on the circumstances; for instance, is the color "gray" an integral part of the object "elephant" or can it be changed so that a colored elephant is still considered to be an elephant?
Hence, the core features are assumed to be a deterministic function of Y and ID. The prediction ŷ for y, given X = x, is of the form fθ(x) for a suitable function fθ with parameters θ ∈ Rd, where the parameters θ correspond to the weights in a DNN, for example.
# 2.2 Data | 1710.11469#24 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
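The causal graph of Eq. (4) in the chunk above can be simulated with toy functional forms (all choices below are assumptions made purely for illustration): a style intervention Δ shifts X^style while X^core given (Y, ID) stays fixed.

```python
# Toy simulation (functional forms are illustrative assumptions, not the
# paper's model) of the causal graph in Eq. (4): an intervention delta
# shifts the style feature but leaves X^core | (Y, ID) unchanged.
import numpy as np

def sample_images(delta, n=2000, seed=0):
    rng = np.random.default_rng(seed)
    y = rng.integers(0, 2, n)                        # class Y
    ident = rng.integers(0, 50, n)                   # identifier ID
    x_core = y + 0.01 * ident                        # deterministic in (Y, ID)
    x_style = 0.5 * y + rng.normal(size=n) + delta   # shifted by intervention
    return np.stack([x_core, x_style], axis=1), y

x_train, _ = sample_images(delta=0.0)
x_test, _ = sample_images(delta=3.0, seed=1)
# Only the style coordinate moves under the intervention:
print(x_test[:, 1].mean() - x_train[:, 1].mean())   # close to 3.0
print(x_test[:, 0].mean() - x_train[:, 0].mean())   # close to 0.0
```

A classifier that leans on the second (style) coordinate would degrade under delta = 3, while one using only the first (core) coordinate would not, which is exactly the robustness notion formalized later in Section 2.3.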
1710.11573 | 24 | Algorithm 1 Train an ℓ-layer hard-threshold network Ŷ = f(X; W) on dataset D = (X, Tℓ) with feasible target propagation (FTPROP) using loss functions L = {Ld} for d = 1, ..., ℓ
1: initialize weights W = {W1, ..., Wℓ} randomly
2: initialize targets T1, ..., Tℓ-1 as the outputs of their hidden units in f(X; W)
3: set T0 ← X and set T ← {T0, T1, ..., Tℓ}
4: FTPROP(W, T, L, ℓ)   // train the network by searching for a feasible target setting
5: function FTPROP(weights W, targets T, losses L, and layer index d)
6:   optimize Wd with respect to layer loss Ld(Zd, Td)   // check feasibility; Zd = Wd Td-1
7:   if activations Hd = g(Wd Td-1) equal the targets Td then
8:     return True   // feasible
9:   else if this is the first layer (i.e., d = 1) then
10:    return False   // infeasible
11:  while computational budget of this layer not exceeded do   // e.g., determined by beam search | 1710.11573#24 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 25 | # 2.2 Data
We assume we have n data points (xi, yi, idi) for i = 1, ..., n, where the observations idi with i = 1, ..., n of variable ID can also contain unobserved values. Let m ≤ n be the number of unique realizations of (Y, ID) and let S1, ..., Sm be a partition of {1, ..., n} such that, for each j ∈ {1, ..., m}, the realizations (yi, idi) are identical5 for all i ∈ Sj. While our prime application is classification, regression settings with continuous Y can be approximated in this framework by slicing the range of the response variable into distinct bins in analogy to the approach in sliced inverse regression (Li, 1991). The cardinality of Sj is denoted by nj := |Sj| ≥ 1. Then n = Σj nj is again the total number of samples and c = n - m is the total number of grouped observations. Typically nj = 1 for most samples and occasionally nj ≥ 2 but one can also envisage scenarios with larger groups of the same identifier (y, id).
# 2.3 Domain shift robustness | 1710.11469#25 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
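The grouping scheme in the "2.2 Data" chunk above, together with the CoRe conditional-variance penalty described in the abstract, can be sketched as follows (an illustrative sketch with assumed names, not the paper's implementation):

```python
# Minimal sketch (assumed API, not the paper's code) of CoRe grouping:
# observations sharing (y, id) form a group S_j, and we penalize the
# variance of the predictions within each group with n_j >= 2.
import numpy as np
from collections import defaultdict

def core_penalty(preds, ys, ids):
    """Mean within-group variance of predictions, grouping on (y, id).
    Samples with unobserved id (None) stay ungrouped (n_j = 1)."""
    groups = defaultdict(list)
    for k, (y, ident) in enumerate(zip(ys, ids)):
        # unobserved ids get a unique key so they form singleton groups
        key = (y, ident) if ident is not None else (y, ("__singleton__", k))
        groups[key].append(preds[k])
    variances = [np.var(g) for g in groups.values() if len(g) > 1]
    return float(np.mean(variances)) if variances else 0.0

preds = np.array([0.9, 0.7, 0.2, 0.8])
ys = [1, 1, 0, 1]
ids = ["a", "a", None, "b"]   # two photos of the same person "a"
print(core_penalty(preds, ys, ids))   # variance within the (1, "a") group, ~0.01
```

In training, this penalty would be added (with a tuning weight) to the usual classification loss, discouraging predictions that differ between images of the same (class, identity) pair.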
1710.11573 | 25 | 12:  Td-1 ← heuristically set targets for the upstream layer to reduce layer loss Ld(Zd, Td)
13:  if FTPROP(W, T, L, d-1) then   // check if targets Td-1 are feasible
14:    optimize Wd with respect to layer loss Ld(Zd, Td); if activations Hd = g(Wd Td-1) equal the targets Td then return True   // feasible | 1710.11573#25 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
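The decomposition idea behind Algorithm 1 (given feasible targets, each unit solves an independent linearly separable problem) can be checked by hand on XOR. The hidden targets below (OR and AND) are hand-chosen assumptions for the example, not the output of the paper's target search:

```python
# Hand-worked illustration of the decomposition claim: with feasible hidden
# targets, a 2-2-1 hard-threshold network for XOR splits into three
# independent, linearly separable perceptron problems.
import numpy as np

def sign(z):
    return np.where(z >= 0, 1.0, -1.0)

def fit_perceptron(X, t, epochs=200, lr=0.5):
    w = np.zeros(X.shape[1])
    for _ in range(epochs):
        mistakes = 0
        for x_i, t_i in zip(X, t):
            if sign(x_i @ w) != t_i:
                w += lr * t_i * x_i
                mistakes += 1
        if mistakes == 0:
            break
    return w

X = np.array([[-1, -1], [-1, 1], [1, -1], [1, 1]], float)
y = -X[:, 0] * X[:, 1]                  # XOR in +/-1 encoding
Xb = np.hstack([np.ones((4, 1)), X])    # prepend a bias feature
t1 = sign(X.sum(axis=1) + 1)            # hidden target 1: OR  (separable)
t2 = sign(X.sum(axis=1) - 1)            # hidden target 2: AND (separable)
w1 = fit_perceptron(Xb, t1)
w2 = fit_perceptron(Xb, t2)
H = np.stack([np.ones(4), sign(Xb @ w1), sign(Xb @ w2)], axis=1)
w3 = fit_perceptron(H, y)               # output unit: also separable
print(sign(H @ w3))                     # [-1.  1.  1. -1.] == XOR
```

Each of the three perceptrons is trained in isolation with a standard convex-style update, yet composing them computes XOR exactly, which is the point of searching for feasible targets.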
1710.11469 | 26 | # 2.3 Domain shift robustness
1710.11469 | 26 | In this section, we clarify against which classes of distributions we hope to achieve robustness. Let ℓ be a suitable loss that maps y and ŷ = fθ(x) to R+. The risk under distribution F and parameter θ is given by
EF[ℓ(Y, fθ(X))].
Let F0 be the joint distribution of (ID, Y, X^style) in the training distribution. A new domain and explicit interventions on the style features can now shift the distribution of (ID, Y, X̃^style) to F. We can measure the distance between distributions F0 and F in different ways. Below we will define the distance considered in this work and denote it by Dstyle(F, F0). Once defined, we get a class of distributions
Fξ = {F : Dstyle(F0, F) ≤ ξ}    (5)
and the goal will be to optimize a worst-case loss over this distribution class in the sense of Eq. (1), where larger values of ξ afford protection against larger distributional changes. The relevant loss for distribution class Fξ is then
Lξ(θ) = sup_{F ∈ Fξ} EF[ℓ(Y, fθ(X))].    (6)
In the limit of arbitrarily strong interventions on the style features X^style, the loss is given by | 1710.11469#26 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 26 | // feasible
As the name implies, FTPROP is a form of target propagation (LeCun, 1986; 1987; Lee et al., 2015) that uses discrete optimization to set discrete targets, instead of using continuous optimization to set continuous targets. FTPROP is also highly related to RDIS (Friesen & Domingos, 2015), a powerful nonconvex optimization algorithm based on satisfiability (SAT) solvers that recursively chooses and sets subsets of variables in order to decompose the underlying problem into simpler subproblems. While RDIS is applied only to continuous problems, the ideas behind RDIS can be generalized to discrete variables via the sum-product theorem (Friesen & Domingos, 2016). This suggests an interesting connection between FTPROP and SAT that we leave for future work.
Of course, modern deep networks will not always have a feasible setting of their targets for a given dataset. For example, a convolutional layer imposes a large amount of structure on its weight matrix, making it less likely that the layer's input will be linearly separable with respect to its targets. Further, ensuring feasibility will in general cause learning to overfit the training data, which will worsen generalization performance. Thus, we would like to relax the feasibility requirements. | 1710.11573#26 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
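Relaxing feasibility as described in the chunk above leads to the mini-batch variant whose special case is the straight-through estimator. The sketch below shows the saturating form of that estimator (the exact form here is an assumption for illustration, in the spirit of Hubara et al.'s saturated STE): forward uses sign(z), backward passes gradients only where |z| ≤ 1.

```python
# Illustrative numpy sketch (not the authors' code) of the saturating
# straight-through estimator for a sign activation: the forward pass is
# hard, the backward pass uses the identity gradient clipped to [-1, 1].
import numpy as np

def sign_forward(z):
    return np.where(z >= 0, 1.0, -1.0)

def sign_backward_ste(z, grad_out):
    # saturated straight-through: pass the gradient where |z| <= 1, else zero
    return grad_out * (np.abs(z) <= 1.0)

z = np.array([-2.0, -0.5, 0.3, 1.7])
print(sign_forward(z))                     # [-1. -1.  1.  1.]
print(sign_backward_ste(z, np.ones(4)))    # [0. 1. 1. 0.]
```

Framed in FTPROP terms, the backward rule amounts to proposing, for each hidden unit, a target equal to the sign the gradient prefers, but only for units whose pre-activation is close enough to the threshold to plausibly flip.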
1710.11469 | 27 | In the limit of arbitrarily strong interventions on the style features X^style, the loss is given by
L∞(θ) = lim_{ξ→∞} sup_{F ∈ Fξ} EF[ℓ(Y, fθ(X))].    (7)
5. Observations where the ID variable is unobserved are not grouped; that is, each such observation is counted as a unique observation of (Y, ID).
Minimizing the loss L∞(θ) with respect to θ guarantees an accuracy in prediction which will work well across arbitrarily large shifts in the conditional distribution of the style features. A natural choice to define Dstyle is to use a Wasserstein-type distance (see e.g. Villani, 2003). We will first define a distance Dy,id for the conditional distributions
X^style | Y = y, ID = id and X̃^style | Y = y, ID = id, | 1710.11469#27 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 27 | In addition, there are many benefits of using mini-batch instead of full-batch training, including improved generalization gap (e.g., see LeCun et al. (2012) or Keskar et al. (2016)), reduced memory usage, the ability to exploit data augmentation, and the prevalence of tools (e.g., GPUs) designed for it.
Fortunately, it is straightforward to convert FTPROP to a mini-batch algorithm and to relax the feasibility requirements. In particular, since it is important not to overcommit to any one mini-batch, the mini-batch version of FTPROP (i) only updates the weights and targets of each layer once per mini-batch; (ii) only takes a small gradient step on each layer's weights, instead of optimizing them fully; (iii) sets the targets of the downstream layer in parallel with updating the current layer's weights, since the weights will not change much; and (iv) removes all checks for feasibility. We call this algorithm FTPROP-MB and present pseudocode in Algorithm 2. FTPROP-MB closely resembles backpropagation-based methods, allowing us to easily implement it with standard libraries.
| 1710.11573#27 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
1710.11469 | 28 | $X^{\text{style}} \mid Y = y, \mathrm{ID} = \mathrm{id}$ and $\tilde{X}^{\text{style}} \mid Y = y, \mathrm{ID} = \mathrm{id}$,
and then set $D(F_0, F) = E(D_{Y,\mathrm{ID}})$, where the expectation is with respect to random ID and labels Y. The distance $D_{y,\mathrm{id}}$ between the two conditional distributions of $X^{\text{style}}$ will be defined as a Wasserstein $W_2^2(F_0, F)$-distance for a suitable cost function $c(x, \tilde{x})$. Specifically, let $\Pi_{y,\mathrm{id}}$ be the couplings between the conditional distributions of $X^{\text{style}}$ and $\tilde{X}^{\text{style}}$, meaning measures supported on $\mathbb{R}^q \times \mathbb{R}^q$ such that the marginal distribution over the first $q$ components is equal to the distribution of $X^{\text{style}}$ and the marginal distribution over the remaining $q$ components equal to the distribution of $\tilde{X}^{\text{style}}$. Then the distance between the conditional distributions is defined as

$$D_{y,\mathrm{id}} = \min_{M \in \Pi_{y,\mathrm{id}}} E_M\big[c(x, \tilde{x})\big],$$

where $c: \mathbb{R}^q \times \mathbb{R}^q \to \mathbb{R}^+$ is a nonnegative, lower semi-continuous cost function. Here, we focus on a Mahalanobis distance as cost

$$c^2(x, \tilde{x}) = (x - \tilde{x})^t \Sigma_{y,\mathrm{id}}^{-1} (x - \tilde{x}).$$ | 1710.11469#28 |
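To make the cost concrete, here is a minimal pure-Python sketch of the squared Mahalanobis cost $c^2(x, \tilde{x})$ between two style vectors. The function name and the hand-picked inverse covariance are illustrative, not from the paper's code; in practice $\Sigma_{y,\mathrm{id}}^{-1}$ would be estimated per $(y, \mathrm{id})$ group.

```python
# Sketch: squared Mahalanobis cost between two style vectors,
# c^2(x, x_tilde) = (x - x_tilde)^t Sigma_inv (x - x_tilde).
# Pure Python; sigma_inv is assumed precomputed (here a hand-inverted 2x2).

def mahalanobis_sq(x, x_tilde, sigma_inv):
    """Squared Mahalanobis distance for equal-length vectors."""
    d = [a - b for a, b in zip(x, x_tilde)]
    # v = Sigma_inv @ d
    v = [sum(row[k] * d[k] for k in range(len(d))) for row in sigma_inv]
    return sum(di * vi for di, vi in zip(d, v))

# With the identity covariance this reduces to squared Euclidean distance.
print(mahalanobis_sq([1.0, 2.0], [0.0, 0.0], [[1.0, 0.0], [0.0, 1.0]]))  # -> 5.0
```

Larger style variability $\Sigma_{y,\mathrm{id}}$ (smaller $\Sigma^{-1}$) makes the same shift cheaper, which is exactly the normalization the text motivates.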
1710.11573 | 28 | Algorithm 2 Train an ℓ-layer hard-threshold network Y = f(X; W) on dataset D = (X, T_ℓ) with mini-batch feasible target propagation (FTPROP-MB) using loss functions L = {L_d}_{d=1}^{ℓ}.
1: initialize weights W = {W_1, ..., W_ℓ} randomly
2: for each minibatch (X_b, T_b) from D do
3:   initialize targets T_1, ..., T_{ℓ−1} as the outputs of their hidden units in f(X_b; W) // forward pass
4:   set T_0 ← X_b, set T_ℓ ← T_b, and set T ← {T_0, ..., T_ℓ}
5:   FTPROP-MB(W, T, L, ℓ)

6: function FTPROP-MB(weights W, targets T, losses L, and layer index d)
7:   T_{d−1} ← set targets for the upstream layer based on current weights W_d and loss L_d(Z_d, T_d)
8:   update W_d with respect to layer loss L_d(Z_d, T_d), where Z_d = W_d T_{d−1} = W_d H_{d−1}
9:   if d > 1 then FTPROP-MB(W, {T_0, ..., T_{d−1}, ..., T_ℓ}, L, d − 1) | 1710.11573#28 |
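Algorithm 2 can be sketched in plain Python for a tiny two-layer network with sign activations. The per-layer hinge loss and the sign-based target heuristic follow the paper's description, but the function names, learning rate, and toy setup below are illustrative assumptions, not the paper's implementation.

```python
# Minimal FTPROP-MB sketch: 2-layer hard-threshold network, hinge layer loss,
# hidden targets set by the sign of the negative output-loss derivative.

def sign(z):
    # hard-threshold activation with outputs in {-1, +1}
    return 1.0 if z >= 0 else -1.0

def dot(w, x):
    return sum(wi * xi for wi, xi in zip(w, x))

def hinge_grad(z, t):
    # derivative of hinge(z, t) = max(0, 1 - t*z) with respect to z
    return -t if t * z < 1 else 0.0

def ftprop_mb(W1, W2, batch, lr=0.05):
    """One FTPROP-MB pass: set hidden targets from the output-layer loss,
    then update each layer's weights on its own (input, target) problem."""
    for x, t_out in batch:
        h = [sign(dot(row, x)) for row in W1]        # forward pass
        g2 = hinge_grad(dot(W2, h), t_out)           # output-layer loss slope
        # target heuristic: t_hid_j = sign(-dL2/dh_j); keep h_j if slope is 0
        t_hid = [h[j] if g2 == 0.0 else sign(-g2 * W2[j]) for j in range(len(h))]
        # update output layer (hinge gradient step on inputs h, target t_out)
        for j in range(len(W2)):
            W2[j] -= lr * g2 * h[j]
        # update each hidden unit as its own perceptron-style problem (x, t_hid[j])
        for j, row in enumerate(W1):
            g1 = hinge_grad(dot(row, x), t_hid[j])
            for k in range(len(row)):
                row[k] -= lr * g1 * x[k]
    return W1, W2
```

Note how, as in steps 7-9 of the pseudocode, targets for the upstream layer are set in parallel with the downstream weight update, and each layer then solves an ordinary linear classification problem.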
1710.11469 | 29 | $c^2(x, \tilde{x}) = (x - \tilde{x})^t \Sigma_{y,\mathrm{id}}^{-1} (x - \tilde{x}).$
The cost of a shift is hence measured against the variability under the distribution $F_0$, $\Sigma_{y,\mathrm{id}} = \mathrm{Cov}(X^{\text{style}} \mid Y, \mathrm{ID})$.$^6$
# 3. Conditional variance regularization
# 3.1 Pooled estimator
Let $(x_i, y_i)$ for $i = 1, \ldots, n$ be the observations that constitute the training data and $\hat{y}_i = f_\theta(x_i)$ the prediction for $y_i$. The standard approach is to simply pool over all available observations, ignoring any grouping information that might be available. The pooled estimator thus treats all examples identically by summing over the empirical loss as

$$\hat\theta^{\text{pool}} = \mathrm{argmin}_\theta\; \hat{E}\big[\ell(Y, f_\theta(X))\big] + \gamma \cdot \mathrm{pen}(\theta), \qquad (8)$$

where the first part is simply the empirical loss over the training data,

$$\hat{E}\big[\ell(Y, f_\theta(X))\big] = \frac{1}{n} \sum_{i=1}^{n} \ell\big(y_i, f_\theta(x_i)\big).$$ | 1710.11469#29 |
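As a toy illustration of the pooled estimator (8), the sketch below minimizes a squared empirical loss plus a ridge penalty by grid search over a one-dimensional parameter; the model $f_\theta(x) = \theta x$, the data, and the grid are all illustrative choices, not from the paper.

```python
# Sketch of the pooled estimator (8): empirical loss plus a ridge penalty,
# minimized here by brute-force grid search over a 1-D parameter theta.

def pooled_objective(theta, data, gamma):
    emp_loss = sum((y - theta * x) ** 2 for x, y in data) / len(data)
    return emp_loss + gamma * theta ** 2        # pen(theta) = ||theta||^2

def pooled_estimator(data, gamma, grid):
    return min(grid, key=lambda th: pooled_objective(th, data, gamma))

data = [(1.0, 2.0), (2.0, 4.1), (3.0, 5.9)]     # roughly y = 2x
grid = [i / 100 for i in range(-400, 401)]
theta_hat = pooled_estimator(data, 0.0, grid)
print(round(theta_hat, 2))                      # close to 2.0 with no penalty
```

With $\gamma > 0$ the estimate shrinks toward zero, the usual ridge behavior; note that nothing in this objective uses the $(Y, \mathrm{ID})$ grouping information.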
1710.11573 | 29 | 6: function FTPROP-MB(weights W, targets T, losses L, and layer index d)
7:   T_{d−1} ← set targets for the upstream layer based on current weights W_d and loss L_d(Z_d, T_d)
8:   update W_d with respect to layer loss L_d(Z_d, T_d), where Z_d = W_d T_{d−1} = W_d H_{d−1}
9:   if d > 1 then FTPROP-MB(W, {T_0, ..., T_{d−1}, ..., T_ℓ}, L, d − 1)
3.1 TARGET HEURISTICS
When the activations of each layer are differentiable, backpropagation provides a method for telling each layer how to adjust its outputs to improve the loss. Conversely, in hard-threshold networks, target propagation provides a method for telling each layer how to adjust its outputs to improve the next layer's loss. While gradients cannot propagate through hard-threshold units, the derivatives within a layer can still be computed. An effective and efficient heuristic for setting the target t_dj for an activation h_dj of layer d is to use the (negative) sign of the partial derivative of the next layer's loss. Specifically, we set t_dj = r(h_dj), where

$$r(h_{dj}) = \mathrm{sign}\left(-\frac{\partial}{\partial h_{dj}} L_{d+1}(Z_{d+1}, T_{d+1})\right) \qquad (2)$$ | 1710.11573#29 |
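For a single output unit with hinge loss, the heuristic (2) can be computed in closed form by chaining the hinge derivative through the pre-activation $z = w \cdot h$. The sketch below uses illustrative names and a toy setup, not the paper's code.

```python
# Sketch of the target heuristic (2): t_dj = sign(-dL_{d+1}/dh_dj), for one
# output unit with hinge loss L(z, t) = max(0, 1 - t*z) and z = w . h.

def sign(v):
    return 1.0 if v >= 0 else -1.0

def target_for_hidden_unit(w, h, t_next, j):
    z = sum(wi * hi for wi, hi in zip(w, h))
    dL_dz = -t_next if t_next * z < 1 else 0.0   # hinge derivative
    dL_dh_j = dL_dz * w[j]                        # chain rule through z = w . h
    return sign(-dL_dh_j)

# If the output margin is violated (t_next * z < 1), the target pushes h_j in
# the direction that increases t_next * z:
print(target_for_hidden_unit([0.5, -1.0], [1.0, 1.0], 1.0, 0))  # -> 1.0
```

Here unit 0 (positive weight) gets target +1 and unit 1 (negative weight) gets target −1, both of which would raise the output pre-activation toward the correct side.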
1710.11469 | 30 | $$\hat{E}\big[\ell(Y, f_\theta(X))\big] = \frac{1}{n} \sum_{i=1}^{n} \ell\big(y_i, f_\theta(x_i)\big).$$
In the second part, $\mathrm{pen}(\theta)$ is a complexity penalty, for example a squared $\ell_2$-norm of the weights $\theta$ in a convolutional neural network as a ridge penalty. All examples that compare to the pooled estimator will include a ridge penalty as default.
6. As an example, if the change in distribution for $X^{\text{style}}$ is caused by random shift-interventions $\Delta$, then

$$\tilde{X}^{\text{style}} = X^{\text{style}} + \Delta, \quad \text{and the distance induced in the distributions is} \quad D_{\text{style}}(F_0, F) \le E\big[E\big(\Delta^t \Sigma_{y,\mathrm{id}}^{-1} \Delta \mid Y = y, \mathrm{ID} = \mathrm{id}\big)\big],$$

ensuring that the strength of the shifts is measured against the natural variability $\Sigma_{y,\mathrm{id}}$ of the style features.
# 3.2 CoRe estimator
The CoRe estimator is defined in Lagrangian form for penalty $\lambda \ge 0$ as

$$\hat\theta^{\text{core}}(\lambda) = \mathrm{argmin}_\theta\; \hat{E}\big[\ell(Y, f_\theta(X))\big] + \lambda \cdot \hat{C}_\theta. \qquad (9)$$

The penalty $\hat{C}_\theta$ is a conditional variance penalty of the form

$$C_{f,\nu,\theta} := E\big[\mathrm{Var}\big(f_\theta(X) \mid Y, \mathrm{ID}\big)^{\nu}\big]$$ | 1710.11469#30 |
1710.11573 | 31 | When used to update only a single target at a time, this heuristic will often set the target value that correctly results in the lowest loss. In particular, when L_{d+1} is convex, its negative partial derivative with respect to h_dj by definition points in the direction of the global minimum of L_{d+1}. Without loss of generality, let h_dj = −1. Now, if r(h_dj) = −1, then it follows from the convexity of the loss that flipping h_dj and keeping all other variables the same would increase L_{d+1}. On the other hand, if r(h_dj) = +1, then flipping h_dj may or may not reduce the loss, since convexity cannot tell us which of h_dj = +1 or h_dj = −1 results in a smaller L_{d+1}. However, the discrepancy between h_dj and r(h_dj) indicates a lack of confidence in the current value of h_dj. A natural choice is thus to set t_dj to push the pre-activation value of h_dj towards 0, making h_dj more likely to flip. Setting t_dj = r(h_dj) = +1 accomplishes this. We note that, while this heuristic performs well, there is still room for improvement, for example by extending | 1710.11573#31 |
1710.11469 | 32 | where typically $\nu \in \{1/2, 1\}$. For $\nu = 1/2$, we also refer to the respective penalties as "conditional-standard-deviation" penalties. In the equivalent constrained form, the estimator can be viewed as an instance of a restricted maximum likelihood estimator (Harville, 1974; Verbeke and Molenberghs, 2009). In practice, in the context of classification and DNNs, we apply the penalty (10) to the predicted logits. The conditional-variance-of-loss penalty (11) takes a similar form to Namkoong and Duchi (2017). The crucial difference of our approach to Namkoong and Duchi (2017) is that we penalize with the expected conditional variance or standard deviation. The fact that we take a conditional variance is here important as we try to achieve distributional robustness with respect to interventions on the style variables. Conditioning on ID allows to guard specifically against these interventions. An unconditional variance penalty, in contrast, can achieve robustness against a pre-defined class of distributions such as a ball of distributions defined | 1710.11469#32 |
1710.11573 | 33 | 3.2 LAYER LOSS FUNCTIONS
The hinge loss, shown in Figure 2a, is a robust version of the perceptron criterion and is thus a natural per-layer loss function to use for finding good settings of the targets and weights, even when there are no feasible target settings. However, in preliminary experiments we found that learning tended to stall and become erratic over time when using the hinge loss for each layer. We attribute this to two separate issues. First, the hinge loss is sensitive to noisy data and outliers (Wu & Liu, 2007), which can cause learning to focus on instances that are unlikely to ever be classified correctly, instead of on instances near the separator. Second, since with convolutional layers and large, noisy datasets it is unlikely that a layer's inputs are entirely linearly separable, it is important to prioritize some targets over others. Ideally, the highest priority targets would be those with the largest effect on the output loss. | 1710.11573#33 |
1710.11469 | 34 | Before showing numerical examples, we discuss the estimation of the expected conditional variance in §3.3 and return to the simple examples of §1.1 in §3.4. Domain shift robustness in a classification setting for a partially linear version of the structural equation model (4) is shown in §4.1. Furthermore, we discuss the population limit of $\hat\theta^{\text{core}}(\lambda)$ in §4.2, where we show that the regularization parameter $\lambda \ge 0$ is proportional to the size of the future style interventions that we want to guard against for future test data.
# 3.3 Estimating the expected conditional variance
Recall that $S_j \subseteq \{1, \ldots, n\}$ contains samples with identical realizations of $(Y, \mathrm{ID})$ for $j \in \{1, \ldots, m\}$. For each $j \in \{1, \ldots, m\}$, define $\hat\mu_{\theta,j}$ as the arithmetic mean across all $f_\theta(x_i)$, $i \in S_j$. The canonical estimator of the conditional variance $\hat{C}_{f,1,\theta}$ is then

$$\hat{C}_{f,1,\theta} = \frac{1}{m} \sum_{j=1}^{m} \frac{1}{|S_j| - 1} \sum_{i \in S_j} \big(f_\theta(x_i) - \hat\mu_{\theta,j}\big)^2, \quad \text{where} \quad \hat\mu_{\theta,j} = \frac{1}{|S_j|} \sum_{i \in S_j} f_\theta(x_i),$$ | 1710.11469#34 |
1710.11573 | 34 | The ï¬rst issue can be solved by saturating (truncating) the hinge loss, thus making it less sensitive to outliers (Wu & Liu, 2007). The saturated hinge loss, shown in Figure 2b, is sat hinge(z, t; b) = max(0, 1 â max(tz, b)) for some threshold b, where we set b = â1 to make its derivative symmetric. The second problem can be solved in a variety of ways, including randomly subsampling targets or weighting the loss associated with each target according to some heuristic. The simplest and most accurate method that we have found is to weight the loss for each target tdj by the magnitude of the
6
(a) (b) (c) (d)
Figure 2: Figures (a)-(c) show different per-layer loss functions (solid blue line) and their derivatives (dashed red line). Figure (d) shows the quantized ReLU activation (solid blue line), which is a sum of step functions, its corresponding sum of saturated-hinge-loss derivatives (dashed red line), and the soft-hinge-loss approximation to this sum that was found to work best (dotted yellow line).
partial derivative of the next layerâs loss Ld+1 with respect to the targetâs hidden unit hdj, such that | 1710.11573#34 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
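The saturated hinge loss from the chunk above can be sketched in a few lines. This is our own illustrative scalar formulation (z is the pre-activation, t ∈ {−1, +1} the target, b the saturation threshold); the function names are ours, not the paper's code.

```python
def sat_hinge(z, t, b=-1.0):
    # Saturated hinge loss from the text: max(0, 1 - max(t*z, b)).
    # With b = -1 the derivative is symmetric around the threshold.
    return max(0.0, 1.0 - max(t * z, b))

def sat_hinge_grad(z, t, b=-1.0):
    # Derivative w.r.t. z: -t where the loss is active and unsaturated
    # (b < t*z < 1), zero elsewhere.
    return -t if b < t * z < 1.0 else 0.0
```

With b = −1 the loss is capped at 2, so a single badly misclassified unit cannot dominate the layer loss.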
1710.11469 | 35 | and analogously for the conditional-variance-of-loss, defined in Eq. (11).7 If there are no groups of samples that share the same identifier (y, id), we define Ĉ_{f,1,θ} to vanish. The CoRe estimator is then identical to pooled estimation in this special case.
7. The right hand side can also be interpreted as the graph Laplacian (Belkin et al., 2006) of an appropriately weighted graph that fully connects all observations i ∈ S_j for each j ∈ {1, . . . , m}.
# 3.4 Motivating examples (continued)
We revisit the first and the second example from §1.1. Figure 4 shows subsamples of the respective training and test sets with the estimated decision boundaries for different values of the penalty parameter λ; in both examples, n = 20000 and c = 500. Additionally, grouped examples that share the same (y, id) are visualized: two grouped observations are connected by a line or curve, respectively. In each example, there are ten such groups visualized (better visible in the nonlinear example). | 1710.11469#35 | Conditional Variance Penalties and Domain Shift Robustness |
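A minimal numpy sketch of the grouped conditional-variance penalty described above (our own illustration, not the paper's code): observations sharing the same (y, id) form a group, the within-group variance of the predictions is averaged over groups, and singleton groups contribute nothing, so the penalty vanishes and the estimator reduces to pooled estimation when no groups exist.

```python
import numpy as np

def core_penalty(preds, group_ids):
    # Average within-group variance of predictions over all (y, id)
    # groups with more than one member; returns 0.0 if no such group
    # exists, recovering the pooled estimator.
    preds = np.asarray(preds, dtype=float)
    group_ids = np.asarray(group_ids)
    variances = [preds[group_ids == g].var()
                 for g in np.unique(group_ids)
                 if (group_ids == g).sum() > 1]
    return float(np.mean(variances)) if variances else 0.0
```

In training, this quantity would be added to the pooled loss with a weight λ, matching the role of the penalty parameter in the examples above.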
1710.11469 | 36 | Panel (a) shows the linear decision boundaries for λ = 0, equivalent to the pooled estimator, and for CoRe with λ ∈ {.1, 1}. The pooled estimator misclassifies all test points of class 1 as can be seen in panel (b), suffering from a test error of ≈ 51%. In contrast, the decision boundary of the CoRe estimator with λ = 1 aligns with the direction along which the grouped observations vary, classifying the test set with almost perfect accuracy (test error is ≈ 0%).
Panels (c) and (d) show the corresponding plots for the second example for penalty values λ ∈ {0, 0.05, 0.1, 1}. While all of them yield good performance on the training set, only a value of λ = 1, which is associated with a circular decision boundary, achieves almost perfect accuracy on the test set (test error is ≈ 0%). The pooled estimator suffers from a test error of ≈ 58%.
4. Domain shift robustness for the CoRe estimator | 1710.11469#36 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11573 | 36 | L_d(z_dj, t_dj) = sat_hinge(z_dj, t_dj) · |∂L_{d+1}/∂h_dj|    (3)
While the saturated hinge loss works well, if the input z_dj ever moves out of the range [−1, +1] then its derivative will become zero and the unit will no longer be trainable. To avoid this, we propose the soft hinge loss, shown in Figure 2c, where soft_hinge(z, t) = tanh(−tz) + 1. Like the saturated hinge, the soft hinge has slope 1 at the threshold and has a symmetric derivative; however, it also benefits from having a larger input region with non-zero derivative. Note that Bengio et al. (2013) report that using the derivative of a sigmoid as the STE performed worse than the identity function. Based on our experiments with other loss functions, including variations of the squared hinge loss and the log loss, this is most likely because the slope of the sigmoid is less than unity at the threshold, which causes vanishing gradients. Loss functions with asymmetric derivatives around the threshold also seemed to perform worse than those with symmetric derivatives (e.g., the saturating and soft hinge losses). In our experiments, we show that the soft hinge loss outperforms the saturated hinge loss for both sign and quantized ReLU activations, which we discuss below. | 1710.11573#36 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem |
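The soft hinge loss and the per-target scaling of Eq. (3) above can be sketched directly (scalar form; the names and the `downstream_grad` argument are our own illustrative choices):

```python
import math

def soft_hinge(z, t):
    # Soft hinge loss from the text: tanh(-t*z) + 1. Slope 1 at the
    # threshold, symmetric derivative, and non-zero gradient everywhere.
    return math.tanh(-t * z) + 1.0

def scaled_layer_loss(z, t, downstream_grad, hinge=soft_hinge):
    # Per-layer loss for one hidden unit, weighted by the magnitude of
    # the downstream loss gradient w.r.t. that unit, as in Eq. (3).
    return hinge(z, t) * abs(downstream_grad)
```

Unlike the saturated hinge, `soft_hinge` never has an exactly zero derivative, so a unit whose pre-activation drifts far from the threshold remains trainable.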
1710.11469 | 37 | 4. Domain shift robustness for the CoRe estimator
We show two properties of the CoRe estimator. First, consistency is shown under the risk definition (7) for an infinitely large conditional variance penalty and the logistic loss in a partially linear structural equation model. Second, the population CoRe estimator is shown to achieve distributional robustness against shift interventions in a first order expansion.
# 4.1 Asymptotic domain shift robustness under strong interventions
We analyze the loss under strong domain shifts, as given in Eq. (7), for the pooled and the CoRe estimator in a one-layer network for binary classification (logistic regression) in an asymptotic setting of large sample size and strong interventions. | 1710.11469#37 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11469 | 38 | Assume the structural equation for the image X ∈ R^p is linear in the style features X^style ∈ R^q (with generally p ≫ q) and we use logistic regression to predict the class label Y ∈ {−1, 1}. Let the interventions Δ ∈ R^q act additively on the style features X^style (this is only for notational convenience) and let the style features X^style act in a linear way on the image X via a matrix W ∈ R^{p×q} (this is an important assumption without which results are more involved). The core or "conditionally invariant" features are X^core ∈ R^r, where in general r < p but this is not important for the following. For independent ε_Y, ε_ID, ε_style in R, R, R^q respectively with positive density on their support and continuously differentiable
[Figure 4 panels: (a) Example 1, training set; (b) Example 1, test set. Legend: Y=0 (train), Y=1 (train), Y=0 (test), Y=1 (test).]
| 1710.11469#38 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11573 | 38 | When each loss term in each hidden layer is scaled by the magnitude of the partial derivative of its downstream layer's loss and each target is set based on the sign of the same partial derivative, then target propagation transmits information about the output loss to every layer in the network, despite the hard-threshold units. Interestingly, this combination of loss function and target heuristic can exactly reproduce the weight updates of the straight-through estimator (STE). Specifically, the weight updates that result from using the scaled saturated hinge loss from (3) and the target heuristic in (2) are exactly those of the saturated straight-through estimator (SSTE) defined in Hubara et al. (2016), which replaces the derivative of sign(z) with 1_{|z| ≤ 1}, where 1_(·) is the indicator function. Other STEs correspond to different choices of per-layer loss function. For example, the original STE corresponds to the linear loss L(z, t) = −tz with the above target heuristic. This connection provides a justification for existing STE approaches, which can now each be seen as an instance of FTPROP with a particular choice of per-layer loss function and target heuristic. We believe that this will enable more principled investigations and extensions of these methods in future work.
3.4 QUANTIZED ACTIVATIONS | 1710.11573#38 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem |
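The claimed SSTE equivalence can be checked numerically. Below is our own sketch (not the authors' code): the SSTE backward pass for sign(z), and the FTPROP gradient obtained by setting the target to the negative sign of the upstream gradient and differentiating the scaled saturated hinge loss of (3); away from the boundary |z| = 1, the two coincide.

```python
def sste_grad(z, upstream_grad):
    # Saturated STE (Hubara et al., 2016): pass the upstream gradient
    # through sign(z) wherever |z| <= 1, zero it elsewhere.
    return upstream_grad if abs(z) <= 1.0 else 0.0

def ftprop_grad(z, upstream_grad):
    # FTPROP view: target t = -sign(upstream gradient), scaled saturated
    # hinge loss with b = -1; return d/dz [sat_hinge(z, t) * |g|].
    t = -1.0 if upstream_grad > 0 else 1.0
    d_hinge = -t if -1.0 < t * z < 1.0 else 0.0
    return d_hinge * abs(upstream_grad)

# The two estimators produce identical updates at interior points:
for z in (-0.5, 0.0, 0.7, 2.0):
    for g in (-1.3, 0.4):
        assert ftprop_grad(z, g) == sste_grad(z, g)
```

The only disagreement is at |z| = 1 exactly, where the SSTE's non-strict inequality still passes the gradient through.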
1710.11573 | 39 | 3.4 QUANTIZED ACTIVATIONS
Straight-through estimation is also commonly used to backpropagate through quantized variants of standard activations, such as the ReLU. Figure 2d shows a quantized ReLU (qReLU) with 6 evenly-spaced quantization levels. The simplest and most popular straight-through estimator (STE) for qReLU is to use the derivative of the saturated (or clipped) ReLU, ∂ sat_ReLU(x)/∂x = 1_{0 < x < 1}, where sat_ReLU(x) = min(1, max(x, 0)). However, if we instead consider the qReLU activation from the viewpoint of FTPROP, then the qReLU becomes a (normalized) sum of step functions qReLU(z) = (1/k) Σ_{i=0}^{k−1} step(z − i/(k−1)), where step(z) = 1 if z > 0 and 0 otherwise, and is a linear transformation of sign(z). The resulting derivative of the sum of saturated hinge losses (one for each step function) is shown in red in Figure 2d, and is clearly quite different than the STE described above. In initial experiments, this performed as well as or better than the STE; however, we achieved additional performance improvements by using the softened approximation shown in yellow in Figure 2d, which is simply the derivative of a soft hinge that has been scaled and shifted to match the
| 1710.11573#39 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem |
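The sum-of-steps view of the quantized ReLU above can be written down directly (our sketch; `k` is the number of quantization levels, and the exact placement of the thresholds follows the formula quoted in the text):

```python
def step(z):
    # step(z) = 1 if z > 0 and 0 otherwise, as in the text.
    return 1.0 if z > 0 else 0.0

def qrelu(z, k=6):
    # Quantized ReLU as a normalized sum of shifted step functions:
    # qReLU(z) = (1/k) * sum_i step(z - i/(k-1)), i = 0..k-1.
    return sum(step(z - i / (k - 1)) for i in range(k)) / k
```

Each step function is a shifted sign activation, so the per-step target-setting machinery developed for sign(z) carries over unchanged.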
1710.11469 | 40 | Figure 4: The decision boundary as function of the penalty parameters λ for the examples 1 and 2 from Figure 1. There are ten pairs of samples visualized that share the same identifier (y, id) and these are connected by a line resp. a curve in the figures (better visible in panels (c) and (d)). The decision boundary associated with a solid line corresponds to λ = 0, the standard pooled estimator that ignores the groupings. The broken lines are decision boundaries for increasingly strong penalties, taking into account the groupings in the data. Here, we only show a subsample of the data to avoid overplotting.
functions k_y, k_id, k_style, k_core, k_x,

class Y ← k_y(D, ε_Y)
identifier ID ← k_id(Y, ε_ID)
core or conditionally invariant features X^core ← k_core(Y, ID)
style or orthogonal features X^style ← k_style(Y, ID, ε_style) + Δ
image X ← k_x(X^core) + W X^style.    (12)
We assume a logistic regression as a prediction of Y from the image data X:
f_θ(x) := exp(x^T θ) / (1 + exp(x^T θ)). | 1710.11469#40 | Conditional Variance Penalties and Domain Shift Robustness |
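To make the structural equation model (12) concrete, here is a toy simulation in which every structural function k is replaced by a simple linear or threshold map. These concrete choices (coefficients, noise scales, the loading matrix W) are our own illustrative assumptions, not the paper's specification; only the causal ordering follows (12).

```python
import numpy as np

rng = np.random.default_rng(0)

def sample_model(n, p=6, q=2, delta=None):
    # Toy instance of model (12); the maps below stand in for
    # k_y, k_id, k_core, k_style and are illustrative assumptions.
    delta = np.zeros(q) if delta is None else np.asarray(delta)
    y = rng.choice([-1.0, 1.0], size=n)                  # Y <- k_y(D, eps_Y)
    ident = y + 0.1 * rng.standard_normal(n)             # ID <- k_id(Y, eps_ID)
    x_core = np.outer(y, np.ones(p))                     # X_core <- k_core(Y, ID)
    x_style = (np.outer(ident, np.ones(q))               # X_style <- k_style(.)
               + rng.standard_normal((n, q)) + delta)    #   + eps_style + Delta
    W = np.full((p, q), 1.0 / p)                         # style loading matrix W
    x = x_core + x_style @ W.T                           # X <- k_x(X_core) + W X_style
    return x, y
```

Passing a nonzero `delta` shifts only the style features, which is exactly the kind of intervention the CoRe estimator is meant to be robust against.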
1710.11573 | 40 |
Table 1: The best top-1 test accuracy for each network over all epochs when trained with sign, qReLU, and full-precision baseline activations on CIFAR-10 and ImageNet. The hard-threshold activations are trained with both FTPROP-MB with per-layer soft hinge losses (FTP-SH) and the saturated straight-through estimator (SSTE). Bold numbers denote the best performing quantized activation in each experiment.
Network                      Sign: SSTE / FTP-SH   qReLU: SSTE / FTP-SH   ReLU   Sat. ReLU
4-layer convnet (CIFAR-10)         80.6 / 81.3           85.6 / 85.5       86.5     87.3
8-layer convnet (CIFAR-10)         84.6 / 84.9           88.4 / 89.8       91.2     91.2
AlexNet (ImageNet)                 46.7 / 47.3           59.4 / 60.7       61.3     61.9
ResNet-18 (ImageNet)               49.1 / 47.8           60.6 / 64.3       69.1     66.9
qReLU domain. This is a natural choice because the derivative of a sum of a small number of soft hinge losses has a shape similar to that of the derivative of a single soft hinge loss.
# 4 EXPERIMENTS | 1710.11573#40 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem |
1710.11469 | 41 | We assume a logistic regression as a prediction of Y from the image data X:
f_θ(x) := exp(x^T θ) / (1 + exp(x^T θ)).
Given training data with n samples, we estimate θ with θ̂ and use here a logistic loss ℓ_θ(y_i, x_i) = log(1 + exp(−y_i (x_i^T θ))).
The formulation of Theorem 1 relies on the following assumptions.
Assumption 1 We require the following conditions:
(A1) Assume the conditional distribution X^style | Y = y, ID = id under the training distribution F_0 has positive density (with respect to the Lebesgue measure) in an ε-ball in ℓ_∞-norm around the origin for some ε > 0 for all y ∈ Y and id ∈ I.
(A2) Assume the matrix W has full rank q.
(A3) Let M ≤ n be the number of unique realizations among n iid samples of (Y, ID) and let p_n := P(M ≤ n − q). Assume that p_n → 1 for n → ∞. | 1710.11469#41 | Conditional Variance Penalties and Domain Shift Robustness |
1710.11573 | 41 |

# 4 EXPERIMENTS

We evaluated FTPROP-MB with soft hinge per-layer losses (FTP-SH) for training deep networks with sign and 2- and 3-bit qReLU activations by comparing models trained with FTP-SH to those trained with the saturated straight-through estimators (SSTEs) described earlier (although, as discussed, these SSTEs can also be seen as instances of FTPROP-MB). We compared to these SSTEs because they are the standard approach in the literature and they significantly outperformed the STE in our initial experiments (Hubara et al. (2016) observed similar behavior). Computationally, FTPROP-MB has the same performance as straight-through estimation; however, the soft hinge loss involves computing a hyperbolic tangent, which requires more computation than a piecewise linear function. This is the same performance difference seen when using sigmoid activations instead of ReLUs in soft-threshold networks. We also trained each model with ReLU and saturated-ReLU activations as full-precision baselines.
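To make the FTP-SH vs. SSTE comparison concrete, the sketch below contrasts the two per-layer backward passes for a sign activation. The exact soft hinge form used here, l(z, t) = 1 − tanh(z·t), is an assumption of this sketch (the text only says a hyperbolic tangent is involved), and the per-unit targets `t` are hypothetical:

```python
import numpy as np

def sign_forward(z):
    # hard-threshold (sign) activation used in the quantized networks
    return np.where(z >= 0, 1.0, -1.0)

def sste_backward(z, grad_out):
    # saturated straight-through estimator: pass gradients only where |z| <= 1
    return grad_out * (np.abs(z) <= 1.0)

def soft_hinge_backward(z, t):
    # derivative of an assumed tanh-based soft hinge l(z, t) = 1 - tanh(z * t):
    # dl/dz = -t * (1 - tanh(z * t)**2); smooth and nonzero everywhere
    return -t * (1.0 - np.tanh(z * t) ** 2)

z = np.array([-2.0, -0.5, 0.5, 2.0])   # pre-activations
t = np.array([1.0, 1.0, -1.0, -1.0])   # hypothetical per-unit targets
g_sste = sste_backward(z, np.ones_like(z))
g_sh = soft_hinge_backward(z, t)
print(g_sste)  # zero outside the saturation region [-1, 1]
print(g_sh)    # nonzero everywhere, largest near z = 0
```

The qualitative difference is that SSTE silences the gradient outside the saturation region, while a tanh-based soft hinge keeps a small, smoothly decaying signal everywhere.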
1710.11469 | 42 | Assumption (A3) guarantees that the number $c = n - M$ of grouped examples is at least as large as the dimension of the style variables. If we have too few or no grouped examples (small $c$), we cannot estimate the conditional variance accurately. Under these assumptions we can prove domain shift robustness.
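The conditional-variance estimate behind the CoRe penalty can be sketched as follows: observations are grouped by their shared $(Y, \mathrm{ID})$ label, and only non-singleton groups (the grouped examples counted by $c$) contribute a within-group variance. This is a minimal illustrative sketch; the averaging convention across groups is an assumption here, not the paper's exact estimator:

```python
import numpy as np

def core_penalty(values, y, ids):
    """Conditional-variance penalty: average within-group variance of `values`
    (predictions or losses), grouping observations by (y, id).
    Singleton groups contribute no variance term."""
    groups = {}
    for v, yi, idi in zip(values, y, ids):
        groups.setdefault((yi, idi), []).append(v)
    variances = [np.var(g) for g in groups.values() if len(g) > 1]
    return float(np.mean(variances)) if variances else 0.0

# toy example: the first two observations share (y, id) = (1, 'a')
losses = np.array([0.2, 0.4, 0.3])
y      = np.array([1, 1, 0])
ids    = np.array(['a', 'a', 'b'])
penalty = core_penalty(losses, y, ids)
print(penalty)  # variance of [0.2, 0.4] = 0.01
```

The penalized objective is then the mean loss plus λ times this quantity.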
Theorem 1 (Asymptotic domain shift robustness under strong interventions) Under model (12) and Assumption 1, with probability 1, the pooled estimator (8) has infinite loss (7) under arbitrarily large shifts in the distribution of the style features,

$$L_\infty(\hat\theta^{\mathrm{pool}}) = \infty.$$

The CoRe estimator (9) $\hat\theta^{\mathrm{core}}$ with $\lambda \to \infty$ is domain shift robust under strong interventions in the sense that, for $n \to \infty$,

$$L_\infty(\hat\theta^{\mathrm{core}}) \to_p \inf_\theta L_\infty(\theta).$$
1710.11573 | 42 | We did not use weight quantization because our main interest is training with hard-threshold activations, and because recent work has shown that weights can be quantized with little effect on performance (Hubara et al., 2016; Rastegari et al., 2016; Zhou et al., 2016). We tested these training methods on the CIFAR-10 (Krizhevsky, 2009) and ImageNet (ILSVRC 2012) (Russakovsky et al., 2015) datasets. On CIFAR-10, we trained a simple 4-layer convolutional network and the 8-layer convolutional network of Zhou et al. (2016). On ImageNet, we trained AlexNet (Krizhevsky et al., 2012), the most common model in the quantization literature, and ResNet-18 (He et al., 2015a). Further experiment details are provided in Appendix A, along with learning curves for all experiments. Code is available at https://github.com/afriesen/ftprop.
4.1 CIFAR-10
1710.11469 | 43 | $$L_\infty(\hat\theta^{\mathrm{core}}) \to_p \inf_\theta L_\infty(\theta).$$

A proof is given in §A. The respective ridge penalties in both estimators (8) and (9) are assumed to be zero for the proof, but the proof can easily be generalized to include ridge penalties that vanish sufficiently fast for large sample sizes. The Lagrangian regularizer λ is assumed to be infinite for the CoRe estimator to achieve domain shift robustness under these strong interventions. The next section considers the population CoRe estimator in a setting with weak interventions and finite values of the penalty parameter.
4.2 Population domain shift robustness under weak interventions

The previous theorem states that the CoRe estimator can achieve domain shift robustness under strong interventions for an infinitely strong penalty in an asymptotic setting. An open question is how the loss (6),

$$L_\xi(\theta) = \sup_{F \in \mathcal{F}_\xi} E_F[\ell(Y, f_\theta(X))],$$
1710.11573 | 43 | 4.1 CIFAR-10
Test accuracies for the 4-layer and 8-layer convolutional networks on CIFAR-10 are shown in Table 1. For the 4-layer model, FTP-SH shows a consistent 0.5-1% accuracy gain over SSTE for the entire training trajectory, resulting in the 0.7% improvement shown in Table 1. However, for the 2-bit qReLU activation, SSTE and FTP-SH perform nearly identically in the 4-layer model. Conversely, for the more complex 8-layer model, the FTP-SH accuracy is only 0.3% above SSTE for the sign activation, but for the qReLU activation FTP-SH achieves a consistent 1.4% improvement over SSTE.
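A k-bit quantized ReLU of the kind compared above can be sketched as follows (the uniform-step convention over [0, max_val] is an assumption for illustration; implementations differ in the exact placement of quantization levels):

```python
import numpy as np

def qrelu(x, bits=2, max_val=1.0):
    """k-bit quantized ReLU: clip to [0, max_val], then round to one of
    2**bits - 1 uniform steps above zero (an assumed convention)."""
    levels = 2 ** bits - 1
    x = np.clip(x, 0.0, max_val)
    return np.round(x * levels / max_val) * max_val / levels

x = np.array([-0.5, 0.1, 0.4, 0.9, 1.7])
out = qrelu(x, bits=2)
print(out)  # [0.0, 0.0, 1/3, 1.0, 1.0]
```

A 2-bit qReLU thus has four output values {0, 1/3, 2/3, 1}; the forward pass is piecewise constant, which is why a surrogate gradient (SSTE or a per-layer loss such as FTP-SH) is needed in the backward pass.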
1710.11469 | 44 | $$L_\xi(\theta) = \sup_{F \in \mathcal{F}_\xi} E_F[\ell(Y, f_\theta(X))]$$
behaves under interventions of small to medium size and correspondingly smaller values of the penalty. Here, we aim to minimize this loss for a given value of ξ and show that domain shift robustness can be achieved to first order with the population CoRe estimator using the conditional-standard-deviation-of-loss penalty, i.e., Eq. (11) with ν = 1/2, by choosing an appropriate value of the penalty λ. Below we will show this appropriate choice of the penalty weight is $\lambda = \sqrt{\xi}$.
Assumption 2 (B1) Deï¬ne the loss under a deterministic shift δ as
$$h_\theta(\delta) = E_{\tilde F}[\ell(Y, f_\theta(X))],$$
where the expectation is with respect to random $(\mathrm{ID}, Y, \tilde X^{\text{style}}) \sim \tilde F$, with $\tilde F$ defined by the deterministic shift intervention $\tilde X^{\text{style}} = X^{\text{style}} + \delta$ and $(\mathrm{ID}, Y, X^{\text{style}}) \sim F_0$. Assume that for all $\theta \in \Theta$, $h_\theta(\delta)$ is twice continuously differentiable with bounded second derivative for a deterministic shift $\delta \in \mathbb{R}^q$.
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 44 | We posit that the decrease in performance gap for the sign activation when moving from the 4- to 8-layer model is because both methods are able to effectively train the higher-capacity model to achieve close to its best possible performance on this dataset, whereas the opposite is true for the qReLU activation; i.e., the restricted capacity of the 4-layer model limits the ability of both methods to train the more expressive qReLU effectively. If this is true, then we expect that FTP-SH will outperform SSTE for both the sign and qReLU activations on a harder dataset. Unsurprisingly, none of the low-precision methods perform as well as the baseline high-precision methods; however, the narrowness of the performance gap between 2-bit qReLU with FTP-SH and full-precision ReLU is encouraging.
4.2 IMAGENET
The results from the ImageNet experiments are also shown in Table 1. As predicted from the CIFAR-10 experiments, we see that FTP-SH improves test accuracy on AlexNet for both sign and 2-bit
[Figure 3: top-1 accuracy vs. epoch for AlexNet on ImageNet; curves: Sign (FTP-SH), Sign (SSTE), qReLU (FTP-SH), qReLU (SSTE), ReLU, Saturated ReLU. Full caption given below.]
1710.11469 | 45 | (B2) The spectral norm of the conditional variance $\Sigma_{y,\mathrm{id}}$ of $X^{\text{style}} \mid Y, \mathrm{ID}$ under $F_0$ is assumed to be smaller than or equal to some $\zeta \in \mathbb{R}$ for all $y \in \mathcal{Y}$ and $\mathrm{id} \in \mathcal{I}$.
The first assumption (B1) ensures that the loss is well behaved under interventions on the style variables. The second assumption (B2) allows us to take the limit of small conditional variances in the style variables.
If setting $\lambda = \sqrt{\xi}$ and using the conditional-standard-deviation-of-loss penalty, the CoRe estimator optimizes according to

$$\hat\theta^{\mathrm{core}}(\sqrt{\xi}) = \operatorname{argmin}_\theta \; E_{F_0}[\ell(Y, f_\theta(X))] + \sqrt{\xi} \cdot \hat{C}_{\theta,1/2}.$$

The next theorem shows that this is to first order equivalent to minimizing the worst-case loss over the distribution class $\mathcal{F}_\xi$. The following result holds for the population CoRe estimator; see below for a discussion of consistency.

Theorem 2 The supremum of the loss over the class of distributions $\mathcal{F}_\xi$ is to first order given by the expected loss under the distribution $F_0$ with an additional conditional-standard-deviation-of-loss penalty $C_{\theta,1/2}$:
1710.11573 | 45 | Figure 3: The top-1 train (thin dashed lines) and test (thicker solid lines) accuracies for AlexNet with different activation functions on ImageNet. The inset figures show the test accuracy for the final 25 epochs in detail. In both figures, FTPROP-MB with soft hinge (FTP-SH, red) outperforms the saturated straight-through estimator (SSTE, blue). The left figure shows the network with sign activations. The right figure shows that the 2-bit quantized ReLU (qReLU) trained with our method (FTP-SH) performs nearly as well as the full-precision ReLU. Interestingly, saturated ReLU outperforms standard ReLU. Best viewed in color.
1710.11469 | 46 | $$\sup_{F \in \mathcal{F}_\xi} E_F[\ell(Y, f_\theta(X))] = E_{F_0}[\ell(Y, f_\theta(X))] + \sqrt{\xi} \cdot C_{\theta,1/2} + O(\max\{\xi, \zeta\}). \quad (13)$$
A proof is given in Appendix §B. The objective of the population CoRe estimator thus matches to first order the loss under domain shifts if we set the penalty weight $\lambda = \sqrt{\xi}$. Larger
anticipated domain shifts thus naturally require a larger penalty λ in the CoRe estimation. The result is possible as we have chosen the Mahalanobis distance to measure shifts in the style variables and define $\mathcal{F}_\xi$, ensuring that the strength of shifts in the style variables is measured against their natural variance under the training distribution $F_0$.
In practice, the choice of λ involves a somewhat subjective choice about the strength of the distributional robustness guarantee. A stronger distributional robustness property is traded off against a loss in predictive accuracy if the distribution is not changing in the future. One option for choosing λ is to choose the largest penalty weight before the validation loss increases considerably. This approach would provide the best distributional robustness guarantee that keeps the loss of predictive accuracy in the training distribution within a pre-specified bound.
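The rule of thumb for choosing λ described above — take the largest penalty weight whose validation loss stays within a pre-specified bound of the unpenalized loss — can be sketched as follows (the function name, tolerance, and toy numbers are illustrative assumptions):

```python
def select_lambda(lambdas, val_losses, tol=0.05):
    """Pick the largest penalty weight whose validation loss stays within a
    relative tolerance `tol` of the unpenalized (lambda = 0) loss.
    `lambdas` must be sorted ascending with lambdas[0] == 0."""
    base = val_losses[0]
    best = lambdas[0]
    for lam, loss in zip(lambdas, val_losses):
        if loss <= base * (1.0 + tol):
            best = lam  # last qualifying lambda is the largest, since sorted
    return best

lambdas = [0.0, 0.1, 1.0, 10.0, 100.0]
val_losses = [0.30, 0.30, 0.31, 0.33, 0.45]
print(select_lambda(lambdas, val_losses, tol=0.05))  # -> 1.0
```

Here λ = 1.0 is selected: its validation loss (0.31) is within 5% of the unpenalized loss (0.30), while larger penalties degrade it beyond the bound.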
1710.11573 | 46 | qReLU activations on the more challenging ImageNet dataset. This is also shown in Figure 3, which plots the top-1 train and test accuracy curves for the six different activation functions for AlexNet on ImageNet. The left-hand plot shows that training sign activations with FTP-SH provides consistently better test accuracy than SSTE throughout the training trajectory, despite the hyperparameters being optimized for SSTE. This improvement is even larger for the 2-bit qReLU activation in the right-hand plot, where the FTP-SH qReLU even outperforms the full-precision ReLU for part of its trajectory, and outperforms the SSTE-trained qReLU by almost 2%. Interestingly, we find that the saturated ReLU outperforms the standard ReLU by almost a full point of accuracy. We believe that this is due to the regularization effect caused by saturating the activation. This may also account for the surprisingly good performance of the FTP-SH qReLU relative to full-precision ReLU, as hard-threshold activations also provide a strong regularization effect.
1710.11469 | 47 | As a caveat, the result takes the limit of small conditional variance of X^style in the training distribution and small additional interventions. Under larger interventions higher-order terms could start to dominate, depending on the geometry of the loss function and fθ. A further caveat is that the result looks at the population CoRe estimator. For finite sample sizes, we would optimize a noisy version on the rhs of (13). To show domain shift robustness in an asymptotic sense, we would need additional uniform convergence (in θ) of both the empirical loss and the conditional variance in that for n → ∞,
$\sup_\theta \big| E_{F_n}[\ell(Y, f_\theta(X))] - E_{F_0}[\ell(Y, f_\theta(X))] \big| \to_p 0, \quad \text{and} \quad \sup_\theta \big| \hat{C}_{1/2,\theta} - C_{1/2,\theta} \big| \to_p 0.$
While this is in general a reasonable assumption to make, the validity of the assumption will depend on the specific function class and on the chosen estimator of the conditional variance.
# 5. Experiments
We perform an array of different experiments, showing the applicability and advantage of the conditional variance penalty for two broad settings: | 1710.11469#47 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
1710.11573 | 47 | Finally, we ran a single experiment with ResNet-18 on ImageNet, using hyperparameters from previous works that used SSTE, to check (i) whether the soft hinge loss exhibits vanishing gradient behavior due to its diminishing slope away from the origin, and (ii) to evaluate the performance of FTP-SH for a less-quantized ReLU (we used k = 5 steps, which is less than the full range of a 3-bit ReLU). While FTP-SH does slightly worse than SSTE for the sign function, we believe that this is because the hyperparameters were tuned for SSTE and not due to vanishing gradients, as we would expect much worse accuracy in that case. Results from the qReLU activation provide further evidence against vanishing gradients as FTP-SH for qReLU outperforms SSTE by almost 4% in top-1 accuracy (Table 1).
# 5 CONCLUSION | 1710.11573#47 | Deep Learning as a Mixed Convex-Combinatorial Optimization Problem | As neural networks grow deeper and wider, learning networks with
hard-threshold activations is becoming increasingly important, both for network
quantization, which can drastically reduce time and energy requirements, and
for creating large integrated systems of deep networks, which may have
non-differentiable components and must avoid vanishing and exploding gradients
for effective learning. However, since gradient descent is not applicable to
hard-threshold functions, it is not clear how to learn networks of them in a
principled way. We address this problem by observing that setting targets for
hard-threshold hidden units in order to minimize loss is a discrete
optimization problem, and can be solved as such. The discrete optimization goal
is to find a set of targets such that each unit, including the output, has a
linearly separable problem to solve. Given these targets, the network
decomposes into individual perceptrons, which can then be learned with standard
convex approaches. Based on this, we develop a recursive mini-batch algorithm
for learning deep hard-threshold networks that includes the popular but poorly
justified straight-through estimator as a special case. Empirically, we show
that our algorithm improves classification accuracy in a number of settings,
including for AlexNet and ResNet-18 on ImageNet, when compared to the
straight-through estimator. | http://arxiv.org/pdf/1710.11573 | Abram L. Friesen, Pedro Domingos | cs.LG, cs.CV, cs.NE | 14 pages (9 body, 5 pages of references and appendices) | In Proceedings of the International Conference on Learning
Representations (ICLR) 2018 | cs.LG | 20171031 | 20180416 | [
{
"id": "1710.03740"
},
{
"id": "1606.06160"
},
{
"id": "1512.03385"
}
] |
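The 1710.11573 chunks above compare a k-step quantized ReLU (qReLU) trained with the saturated straight-through estimator (SSTE) against FTP-SH. A minimal NumPy sketch of a generic k-step qReLU forward pass and a saturated straight-through backward pass is given below; the function names, the step count `k`, and the saturation range `[0, t]` are illustrative assumptions, not the parameterization from the paper:

```python
import numpy as np

def qrelu(x, k=2, t=1.0):
    # quantized ReLU: clamp the input to [0, t], then round it onto
    # k uniform quantization steps spanning that range
    return np.round(np.clip(x, 0.0, t) * k / t) * (t / k)

def sste_grad(x, upstream, t=1.0):
    # saturated straight-through estimator: pass the upstream gradient
    # through unchanged wherever the input lies in the non-saturated
    # range [0, t]; zero it out where the clamp saturates
    return upstream * ((x >= 0.0) & (x <= t)).astype(x.dtype)
```

With `k=2`, an input of 0.3 quantizes to 0.5 (the nearest of the levels 0, 0.5, 1), while the straight-through gradient is suppressed exactly where the forward clamp is flat, which is the behavior SSTE is meant to approximate.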
1710.11469 | 48 | # 5. Experiments
We perform an array of different experiments, showing the applicability and advantage of the conditional variance penalty for two broad settings:
1. Settings where we do not know what the style variables correspond to but still want to protect against a change in their distribution in the future. In the examples we show cases where the style variable ranges from fashion (§5.2), image quality (§5.3), movement (§5.4) and brightness (§5.7), which are all not known explicitly to the method. We also include genuinely unknown style variables in §5.1 (in the sense that they are unknown not only to the methods but also to us as we did not explicitly create the style interventions).
2. Settings where we do know what type of style interventions we would like to protect against. This is usually dealt with by data augmentation (adding images which are, say, rotated or shifted compared to the training data if we want to protect against rotations or translations in the test data; see for example Schölkopf et al. (1996)). The conditional variance penalty is here exploiting that some augmented samples were generated from the same original sample and we use as ID variable the index
| 1710.11469#48 | Conditional Variance Penalties and Domain Shift Robustness | When training a deep neural network for image classification, one can broadly
distinguish between two types of latent features of images that will drive the
classification. We can divide latent features into (i) "core" or "conditionally
invariant" features $X^\text{core}$ whose distribution $X^\text{core}\vert Y$,
conditional on the class $Y$, does not change substantially across domains and
(ii) "style" features $X^{\text{style}}$ whose distribution $X^{\text{style}}
\vert Y$ can change substantially across domains. Examples for style features
include position, rotation, image quality or brightness but also more complex
ones like hair color, image quality or posture for images of persons. Our goal
is to minimize a loss that is robust under changes in the distribution of these
style features. In contrast to previous work, we assume that the domain itself
is not observed and hence a latent variable.
We do assume that we can sometimes observe a typically discrete identifier or
"$\mathrm{ID}$ variable". In some applications we know, for example, that two
images show the same person, and $\mathrm{ID}$ then refers to the identity of
the person. The proposed method requires only a small fraction of images to
have $\mathrm{ID}$ information. We group observations if they share the same
class and identifier $(Y,\mathrm{ID})=(y,\mathrm{id})$ and penalize the
conditional variance of the prediction or the loss if we condition on
$(Y,\mathrm{ID})$. Using a causal framework, this conditional variance
regularization (CoRe) is shown to protect asymptotically against shifts in the
distribution of the style variables. Empirically, we show that the CoRe penalty
improves predictive accuracy substantially in settings where domain changes
occur in terms of image quality, brightness and color while we also look at
more complex changes such as changes in movement and posture. | http://arxiv.org/pdf/1710.11469 | Christina Heinze-Deml, Nicolai Meinshausen | stat.ML, cs.LG | null | null | stat.ML | 20171031 | 20190413 | [
{
"id": "1801.06229"
},
{
"id": "1709.05321"
},
{
"id": "1710.10016"
},
{
"id": "1707.00600"
},
{
"id": "1805.12018"
},
{
"id": "1712.06050"
},
{
"id": "1810.12042"
},
{
"id": "1803.06373"
},
{
"id": "1707.09724"
},
{
"id": "1702.02604"
},
{
"id": "1807.10272"
}
] |
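The CoRe penalty described in the 1710.11469 chunks groups observations that share the same class and identifier (Y, ID) = (y, id) and penalizes the within-group conditional variance of the loss. A minimal sketch of that grouping scheme follows; the function names, the population-variance estimator, and the handling of singleton groups are illustrative assumptions rather than the authors' implementation:

```python
import numpy as np

def core_penalty(losses, y, ids):
    """CoRe-style penalty: average within-group variance of the per-example
    loss, grouping examples that share the pair (y, id). Groups with a
    single member carry no variance information and are skipped."""
    losses, y, ids = map(np.asarray, (losses, y, ids))
    groups = {}
    for i, key in enumerate(zip(y, ids)):
        groups.setdefault(key, []).append(i)
    variances = [np.var(losses[idx]) for idx in groups.values() if len(idx) > 1]
    return float(np.mean(variances)) if variances else 0.0

def core_objective(losses, y, ids, lam=1.0):
    # total objective: mean loss plus lambda times the conditional-variance penalty
    return float(np.mean(losses)) + lam * core_penalty(losses, y, ids)
```

For example, two samples with the same (y, id) but losses 1.0 and 3.0 contribute a within-group variance of 1.0, so with lam = 0.5 the objective adds 0.5 on top of the mean loss; samples whose ID is unobserved simply fall into singleton groups and only contribute to the mean loss term.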