doi | chunk-id | chunk | id | title | summary | source | authors | categories | comment | journal_ref | primary_category | published | updated | references
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
1603.04467 | 61 | that the system uses a single, optimized dataflow graph to represent the entire computation, and caches information about that graph on each device to minimize coordination overhead. Like Spark and Naiad, TensorFlow works best when there is sufficient RAM in the cluster to hold the working set of the computation. Iteration in TensorFlow uses a hybrid approach: multiple replicas of the same dataflow graph may be executing at once, while sharing the same set of variables. Replicas can share data asynchronously through the variables, or use synchronization mechanisms in the graph, such as queues, to operate synchronously. TensorFlow also supports iteration within a graph, which is a hybrid of CIEL and Naiad: for simplicity, each node fires only when all of its inputs are ready (like CIEL); but for efficiency the graph is represented as a static, cyclic dataflow (like Naiad). | 1603.04467#61 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
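The iteration and synchronization mechanisms described in the chunk above can be illustrated with a small sketch against the TensorFlow 1.x Python API that this paper describes. The loop bound, constants, and queue capacity below are illustrative assumptions rather than values from the paper; the sketch only shows that an in-graph loop and a queue are both expressed as ordinary nodes in the static dataflow graph.

```python
import tensorflow as tf  # TensorFlow 1.x API, as released with this paper

# In-graph iteration: the loop is part of the static dataflow graph, and each
# node fires once all of its inputs are ready.
i0 = tf.constant(0)
acc0 = tf.constant(1.0)

def cond(i, acc):
    return i < 10                    # ten iterations, chosen arbitrarily

def body(i, acc):
    return [i + 1, acc * 2.0]        # each trip through the loop doubles acc

_, acc_final = tf.while_loop(cond, body, [i0, acc0])

# Queue-based synchronization: one replica can enqueue a value that another
# dequeues before proceeding, giving synchronous coordination through the graph.
q = tf.FIFOQueue(capacity=32, dtypes=[tf.float32])
enqueue = q.enqueue([acc_final])
dequeue = q.dequeue()

with tf.Session() as sess:
    print(sess.run(acc_final))       # 1024.0
    sess.run(enqueue)                # publish a value through the queue...
    print(sess.run(dequeue))         # ...and consume it in FIFO order
```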
1603.04467 | 62 | Like TensorFlow, several other distributed systems have been developed for executing dataflow graphs across a cluster. Dryad [24] and Flume [8] demonstrate how a complex workflow can be represented as a dataflow graph. CIEL [37] and Naiad [36] introduce generic support for data-dependent control flow: CIEL represents iteration as a DAG that dynamically unfolds, whereas Naiad uses a static graph with cycles to support lower-latency iteration. Spark [55] is optimized for computations that access the same data repeatedly, using "resilient distributed datasets" (RDDs), which are soft-state cached outputs of earlier computations. Dandelion [44] executes dataflow graphs across a cluster of heterogeneous devices, including GPUs. TensorFlow uses a hybrid dataflow model that borrows elements from each of these systems. Its dataflow scheduler, which is the component that chooses the next node to execute, uses the same basic algorithm as Dryad, Flume, CIEL, and Spark. Its distributed architecture is closest to Naiad, in
# 12 Conclusions | 1603.04467#62 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
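The dataflow scheduler mentioned in the chunk above chooses the next node to execute once all of that node's inputs are ready. The following toy, single-process sketch shows the generic ready-node scheduling idea shared by Dryad, Flume, CIEL, Spark, and TensorFlow; the function and variable names and the example graph are invented for illustration and are not TensorFlow's actual scheduler code.

```python
from collections import deque

def run_dataflow(nodes, deps, compute):
    """Execute a DAG by firing each node as soon as all of its inputs are ready."""
    remaining = {n: len(deps[n]) for n in nodes}   # unmet-input count per node
    consumers = {n: [] for n in nodes}
    for n in nodes:
        for d in deps[n]:
            consumers[d].append(n)

    ready = deque(n for n in nodes if remaining[n] == 0)  # source nodes start ready
    values = {}
    while ready:
        n = ready.popleft()
        values[n] = compute[n](*[values[d] for d in deps[n]])
        for c in consumers[n]:            # finishing n may make its consumers ready
            remaining[c] -= 1
            if remaining[c] == 0:
                ready.append(c)
    return values

# Example graph: c = a + b, then d = c * c
nodes = ["a", "b", "c", "d"]
deps = {"a": [], "b": [], "c": ["a", "b"], "d": ["c"]}
compute = {"a": lambda: 2.0, "b": lambda: 3.0,
           "c": lambda a, b: a + b, "d": lambda c: c * c}
print(run_dataflow(nodes, deps, compute)["d"])  # prints 25.0
```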
1603.04467 | 63 | # 12 Conclusions
We have described TensorFlow, a flexible data flow-based programming model, as well as single machine and distributed implementations of this programming model. The system is borne from real-world experience in conducting research and deploying more than one hundred machine learning projects throughout a wide range of Google products and services. We have open sourced a version of TensorFlow, and hope that a vibrant shared community develops around the use of TensorFlow. We are excited to see how others outside of Google make use of TensorFlow in their own work.
# Acknowledgements
The development of TensorFlow has benefitted enormously from the large and broad machine learning community at Google, and in particular from the suggestions and contributions from the rest of the Google Brain team and also from the hundreds of DistBelief and TensorFlow users within Google. Without a doubt, the usability and functionality of TensorFlow has been greatly expanded by listening to their feedback. | 1603.04467#63 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 64 | Many individuals have contributed to TensorFlow and to its open source release, including John Giannandrea (for creating a supportive research environment), Irina Kofman and Phing Turner (project management), Bill Gruber and David Westbrook (technical writing), Dave Andersen, Anelia Angelova, Yaroslav Bulatov, Jianmin Chen, Jerjou Cheng, George Dahl, Andrew Dai, Lucy Gao, mig Gerard, Stephan Gouws, Naveen Kumar, Geoffrey Hinton, Mrinal Kalarishnan, Anjuli Kannan, Yutaka Leon-Suematsu, Frank Li, Peter Liu, Xiaobing Liu, Nishant Patil, Pierre Sermanet, Noam Shazeer, Jascha Sohl-dickstein, Philip Tucker, Yonghui Wu, Ke Yang, and Cliff Young (general contributions), Doug Fritz, Patrick Hurst, Dilip Krishnan, Daniel Smilkov, James Wexler, Jimbo Wilson, Kanit Ham Wongsuphasawat, Cassandra Xia, and the Big Picture team (graph visualization), Chris Leary, Robert Springer and the Stream Executor team, Kayur Patel, Michael Piatek, and the coLab team, and the many others who have contributed to the TensorFlow design and code base.
# References | 1603.04467#64 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 65 | # References
[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org. | 1603.04467#65 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 66 | [2] Anelia Angelova, Alex Krizhevsky, and Vincent Vanhoucke. Pedestrian detection with a large-field-of-view deep network. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 704–711. IEEE, 2015. CalTech PDF.
[3] Arvind and David E. Culler. Annual review of computer science vol. 1, 1986, chapter Dataflow Architectures, pages 225–253. www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA166235.
[4] Arvind and Rishiyur S. Nikhil. Executing a program on the MIT tagged-token dataflow architecture. IEEE Trans. Comput., 39(3):300–318, 1990. dl.acm.org/citation.cfm?id=78583.
[5] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014. arxiv.org/abs/1412.7755. | 1603.04467#66 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 67 | [6] Françoise Beaufays. The neural networks behind Google Voice transcription, 2015. googleresearch.blogspot.com/2015/08/the-neural-networks-behind-google-voice.html.
[7] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU math expression compiler. In Proceedings of the Python for scientific computing conference (SciPy), volume 4, page 3. Austin, TX, 2010. UMontreal PDF.
[8] Craig Chambers, Ashish Raniwala, Frances Perry, Stephen Adams, Robert R Henry, Robert Bradshaw, and Nathan Weizenbaum. FlumeJava: easy, efficient data-parallel pipelines. In ACM Sigplan Notices, volume 45, pages 363–375. ACM, 2010. research.google.com/pubs/archive/35650.pdf. | 1603.04467#67 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 68 | [9] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. arxiv.org/abs/1410.0759.
[10] Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 571–582, 2014. www.usenix.org/system/files/conference/osdi14/osdi14-paper-chilimbi.pdf.
[11] Jack Clark. Google turning its lucrative web search over to AI machines, 2015. www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines. | 1603.04467#68 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 69 | [12] Cliff Click. Global code motion/global value numbering. In ACM SIGPLAN Notices, volume 30, pages 246–257. ACM, 1995. courses.cs.washington.edu/courses/cse501/06wi/reading/click-pldi95.pdf.
[13] Ronan Collobert, Samy Bengio, and Johnny Mariéthoz. Torch: A modular machine learning software library. Technical report, IDIAP, 2002. infoscience.epfl.ch/record/82802/files/rr02-46.pdf.
[14] Jeffrey Dean, Gregory S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker,
Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012. Google Research PDF. | 1603.04467#69 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 70 | Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012. Google Research PDF.
[15] Jack J Dongarra, Jeremy Du Croz, Sven Hammarling, and Iain S Duff. A set of level 3 basic linear algebra subprograms. ACM Transactions on Mathematical Software (TOMS), 16(1):1–17, 1990. www.maths.manchester.ac.uk/~sven/pubs/Level3BLAS-1-TOMS16-90.pdf.
[16] Andrea Frome, Greg S Corrado, Jonathon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013. research.google.com/pubs/archive/41473.pdf.
[17] Javier Gonzalez-Dominguez, Ignacio Lopez-Moreno, Pedro J Moreno, and Joaquin Gonzalez-Rodriguez. Frame-by-frame language identification in short utterances using deep neural networks. Neural Networks, 64:49–58, 2015. | 1603.04467#70 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 71 | [18] Otavio Good. How Google Translate squeezes deep learning onto a phone, 2015. googleresearch.blogspot.com/2015/07/how-google-translate-squeezes-deep.html.
[19] Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet. Multi-digit number recognition from Street View imagery using deep convolutional neural networks. In International Conference on Learning Representations, 2014. arxiv.org/pdf/1312.6082.
[20] Georg Heigold, Vincent Vanhoucke, Alan Senior, Patrick Nguyen, Marc'Aurelio Ranzato, Matthieu Devin, and Jeffrey Dean. Multilingual acoustic models using distributed deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8619–8623. IEEE, 2013. research.google.com/pubs/archive/40807.pdf. | 1603.04467#71 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 72 | [21] Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97, 2012. www.cs.toronto.edu/~gdahl/papers/deepSpeechReviewSPM2012.pdf.
[22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural computation, 9(8):1735–1780, 1997. ftp.idsia.ch/pub/juergen/lstm.pdf.
[23] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. arxiv.org/abs/1502.03167. | 1603.04467#72 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 73 | [24] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: distributed data-parallel programs from sequential building blocks. In ACM SIGOPS Operating Systems Review, volume 41, pages 59–72. ACM, 2007. www.michaelisard.com/pubs/eurosys07.pdf.
[25] Benoît Jacob, Gaël Guennebaud, et al. Eigen library for linear algebra. eigen.tuxfamily.org.
[26] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM, 2014. arxiv.org/pdf/1408.5093. | 1603.04467#73 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 74 | [27] Andrej Karpathy, George Toderici, Sachin Shetty, Tommy Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732. research.google.com/pubs/archive/42455.pdf.
[28] A Krizhevsky. Cuda-convnet, 2014. code.google.com/p/cuda-convnet/.
[29] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014. arxiv.org/abs/1404.5997.
[30] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset. www.cs.toronto.edu/~kriz/cifar.html.
[31] Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Greg Corrado, Kai Chen, Jeff Dean, and Andrew Ng. Building high-level features using large scale unsupervised learning. In ICML'2012, 2012. Google Research PDF. | 1603.04467#74 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 75 | [32] Yann LeCun, Corinna Cortes, and Christopher JC Burges. The MNIST database of handwritten digits, 1998. yann.lecun.com/exdb/mnist/.
[33] Mu Li, Dave Andersen, and Alex Smola. Parameter server. parameterserver.org.
[34] Chris J Maddison, Aja Huang, Ilya Sutskever, and David Silver. Move evaluation in Go using deep convolutional neural networks. arXiv preprint arXiv:1412.6564, 2014. arxiv.org/abs/1412.6564.
[35] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations: Workshops Track, 2013. arxiv.org/abs/1301.3781.
[36] Derek G Murray, Frank McSherry, Rebecca Isaacs, Michael Isard, Paul Barham, and Martín Abadi. Naiad: a timely dataflow system. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 439–455. ACM, 2013. Microsoft Research PDF. | 1603.04467#75 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 76 | [37] Derek G. Murray, Malte Schwarzkopf, Christopher Smowton, Steven Smith, Anil Madhavapeddy, and Steven Hand. CIEL: a universal execution engine for distributed data-flow computing. In Proceedings of the Ninth USENIX Symposium on Networked Systems Design and Implementation, 2011. Usenix PDF.
[38] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015. arxiv.org/abs/1507.04296.
[39] CUDA Nvidia. Cublas library. NVIDIA Corporation, Santa Clara, California, 15, 2008. developer.nvidia.com/cublas. | 1603.04467#76 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 77 | [39] CUDA Nvidia. Cublas library. NVIDIA Corporation, Santa Clara, California, 15, 2008. developer.nvidia.com/cublas.
[40] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. ACM SIGPLAN Notices, 48(6):519–530, 2013. people.csail.mit.edu/fredo/tmp/Halide-5min.pdf.
[41] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. arXiv preprint arXiv:1502.02072, 2015. arxiv.org/abs/1502.02072. | 1603.04467#77 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 78 | [42] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011. papers.nips.cc/paper/4390-hogwild-a-lock-free-approach-to-parallelizing-stochastic-gradient-descent.
[43] Chuck Rosenberg. Improving Photo Search: A step across the semantic gap, 2013. googleresearch.blogspot.com/2013/06/improving-photo-search-step-across.html.
[44] Christopher J Rossbach, Yuan Yu, Jon Currey, Jean-Philippe Martin, and Dennis Fetterly. Dandelion: a compiler and runtime for heterogeneous systems. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 49–68. ACM, 2013. research-srv.microsoft.com/pubs/201110/sosp13-dandelion-final.pdf.
[45] David E Rumelhart, Geoffrey E Hinton, and Ronald J Williams. Learning representations by back-propagating errors. Cognitive modeling, 5:3, 1988. www.cs.toronto.edu/~hinton/absps/naturebp.pdf. | 1603.04467#78 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 79 | Kanishka Rao, Françoise Beaufays, and Johan Schalkwyk. Google Voice Search: faster and more accurate, 2015. googleresearch.blogspot.com/2015/09/google-voice-search-faster-and-more.html.
[47] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural.
[48] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR'2015, 2015. arxiv.org/abs/1409.4842.
[49] Seiya Tokui. Chainer: A powerful, flexible and intuitive framework of neural networks. chainer.org.
[50] Vincent Vanhoucke. Speech recognition and deep learning, 2015. googleresearch.blogspot.com/2012/08/speech-recognition-and-deep-learning.html. | 1603.04467#79 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 80 | [51] Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, and John Wilkes. Large-scale cluster management at Google with Borg. the Tenth European Conference In Proceedings of on Computer Systems, page 18. ACM, 2015. re- search.google.com/pubs/archive/43438.pdf.
[52] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. Technical report, arXiv:1412.7449, 2014. arxiv.org/abs/1412.7449.
[53] Oriol Vinyals, Meire Fortunato, Jaitly. arxiv.org/abs/1506.03134. Pointer networks. and Navdeep In NIPS, 2015. | 1603.04467#80 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
1603.04467 | 81 | [54] Dong Yu, Adam Eversole, Mike Seltzer, Kaisheng Yao, Zhiheng Huang, Brian Guenter, Oleksii Kuchaiev, Yu Zhang, Frank Seide, Huaming Wang, et al. An introduction to computational networks and the com- Technical report, Tech. putational network toolkit. Rep. MSR, Microsoft Research, 2014, 2014. re- search.microsoft.com/apps/pubs/?id=226641.
[55] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J Franklin, Scott Shenker, and Ion Stoica. Resilient distributed datasets: A fault-tolerant abstraction for In Proceedings of the in-memory cluster computing. 9th USENIX conference on Networked Systems De- sign and Implementation. USENIX Association, 2012. www.usenix.org/system/ï¬les/conference/nsdi12/nsdi12- ï¬nal138.pdf. | 1603.04467#81 | TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems | TensorFlow is an interface for expressing machine learning algorithms, and an
implementation for executing such algorithms. A computation expressed using
TensorFlow can be executed with little or no change on a wide variety of
heterogeneous systems, ranging from mobile devices such as phones and tablets
up to large-scale distributed systems of hundreds of machines and thousands of
computational devices such as GPU cards. The system is flexible and can be used
to express a wide variety of algorithms, including training and inference
algorithms for deep neural network models, and it has been used for conducting
research and for deploying machine learning systems into production across more
than a dozen areas of computer science and other fields, including speech
recognition, computer vision, robotics, information retrieval, natural language
processing, geographic information extraction, and computational drug
discovery. This paper describes the TensorFlow interface and an implementation
of that interface that we have built at Google. The TensorFlow API and a
reference implementation were released as an open-source package under the
Apache 2.0 license in November, 2015 and are available at www.tensorflow.org. | http://arxiv.org/pdf/1603.04467 | Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mane, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viegas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, Xiaoqiang Zheng | cs.DC, cs.LG | Version 2 updates only the metadata, to correct the formatting of
Mart\'in Abadi's name | null | cs.DC | 20160314 | 20160316 | [
{
"id": "1502.02072"
},
{
"id": "1507.04296"
}
] |
# Abstract

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures: one based on bidirectional LSTMs and conditional random fields, and the other that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.1

1 The code of the LSTM-CRF and Stack-LSTM NER systems are available at https://github.com/glample/tagger and https://github.com/clab/stack-lstm-ner

# 1 Introduction
Named entity recognition (NER) is a challenging learning problem. On the one hand, in most languages and domains, there is only a very small amount of supervised training data available. On the other, there are few constraints on the kinds of words that can be names, so generalizing from this small sample of data is difficult. As a result, carefully constructed orthographic features and language-specific knowledge resources, such as gazetteers, are widely used for solving this task. Unfortunately, language-specific resources and features are costly to develop in new languages and new domains, making NER a challenge to adapt. Unsupervised learning from unannotated corpora offers an alternative strategy for obtaining better generalization from small amounts of supervision. However, even systems that have relied extensively on unsupervised features (Collobert et al., 2011; Turian et al., 2010; Lin and Wu, 2009; Ando and Zhang, 2005b, inter alia) have used these to augment, rather than replace, hand-engineered features (e.g., knowledge about capitalization patterns and character classes in a particular language) and specialized knowledge resources (e.g., gazetteers).
In this paper, we present neural architectures for NER that use no language-specific resources or features beyond a small amount of supervised training data and unlabeled corpora. Our models are designed to capture two intuitions. First, since names often consist of multiple tokens, reasoning jointly over tagging decisions for each token is important. We compare two models here, (i) a bidirectional LSTM with a sequential conditional random field layer above it (LSTM-CRF; §2), and (ii) a new model that constructs and labels chunks of input sentences using an algorithm inspired by transition-based parsing with states represented by stack LSTMs (S-LSTM; §3). Second, token-level evidence for "being a name" includes both orthographic evidence (what does the word being tagged as a name look like?) and distributional evidence (where does the word being tagged tend to occur in a corpus?). To capture orthographic sensitivity, we use a character-based word representation model (Ling et al., 2015b); to capture distributional sensitivity, we combine these representations with distributional representations
Experiments in English, Dutch, German, and Spanish show that we are able to obtain state-of-the-art NER performance with the LSTM-CRF model in Dutch, German, and Spanish, and very near the state-of-the-art in English without any hand-engineered features or gazetteers (§5). The transition-based algorithm likewise surpasses the best previously published results in several languages, although it performs less well than the LSTM-CRF model.

# 2 LSTM-CRF Model

We provide a brief description of LSTMs and CRFs, and present a hybrid tagging architecture. This architecture is similar to the ones presented by Collobert et al. (2011) and Huang et al. (2015).

# 2.1 LSTM
Recurrent neural networks (RNNs) are a family of neural networks that operate on sequential data. They take as input a sequence of vectors (x_1, x_2, . . . , x_n) and return another sequence (h_1, h_2, . . . , h_n) that represents some information about the sequence at every step in the input. Although RNNs can, in theory, learn long dependencies, in practice they fail to do so and tend to be biased towards their most recent inputs in the sequence (Bengio et al., 1994). Long Short-term Memory Networks (LSTMs) have been designed to combat this issue by incorporating a memory cell and have been shown to capture long-range dependencies. They do so using several gates that control the proportion of the input to give to the memory cell, and the proportion from the previous state to forget (Hochreiter and Schmidhuber, 1997). We use the following implementation:

i_t = σ(W_xi x_t + W_hi h_{t-1} + W_ci c_{t-1} + b_i)
c_t = (1 - i_t) ⊙ c_{t-1} + i_t ⊙ tanh(W_xc x_t + W_hc h_{t-1} + b_c)
o_t = σ(W_xo x_t + W_ho h_{t-1} + W_co c_t + b_o)
h_t = o_t ⊙ tanh(c_t)
where σ is the element-wise sigmoid function, and ⊙ is the element-wise product.
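To make the update concrete, here is a minimal NumPy sketch of one step of this LSTM variant (note the coupled input/forget gate through 1 - i_t). The parameter layout and the init_params helper are illustrative assumptions, not the released implementation.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x_t, h_prev, c_prev, params):
    """One step of the LSTM variant above (input and forget gates coupled via 1 - i_t)."""
    W_xi, W_hi, W_ci, b_i = params["i"]
    W_xc, W_hc, b_c = params["c"]
    W_xo, W_ho, W_co, b_o = params["o"]
    i_t = sigmoid(W_xi @ x_t + W_hi @ h_prev + W_ci @ c_prev + b_i)
    c_t = (1.0 - i_t) * c_prev + i_t * np.tanh(W_xc @ x_t + W_hc @ h_prev + b_c)
    o_t = sigmoid(W_xo @ x_t + W_ho @ h_prev + W_co @ c_t + b_o)
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

def init_params(d_in, d_hid, rng):
    """Random parameters with the shapes required above (illustrative only)."""
    m = lambda r, c: rng.normal(scale=0.1, size=(r, c))
    return {
        "i": (m(d_hid, d_in), m(d_hid, d_hid), m(d_hid, d_hid), np.zeros(d_hid)),
        "c": (m(d_hid, d_in), m(d_hid, d_hid), np.zeros(d_hid)),
        "o": (m(d_hid, d_in), m(d_hid, d_hid), m(d_hid, d_hid), np.zeros(d_hid)),
    }

rng = np.random.default_rng(0)
params = init_params(d_in=8, d_hid=16, rng=rng)
h, c = lstm_step(rng.normal(size=8), np.zeros(16), np.zeros(16), params)
```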
For a given sentence (x_1, x_2, . . . , x_n) containing n words, each represented as a d-dimensional vector, an LSTM computes a representation →h_t of the left context of the sentence at every word t. Naturally, generating a representation of the right context ←h_t as well should add useful information. This can be achieved using a second LSTM that reads the same sequence in reverse. We will refer to the former as the forward LSTM and the latter as the backward LSTM. These are two distinct networks with different parameters. This forward and backward LSTM pair is referred to as a bidirectional LSTM (Graves and Schmidhuber, 2005).

The representation of a word using this model is obtained by concatenating its left and right context representations, h_t = [→h_t; ←h_t]. These representations effectively include a representation of a word in context, which is useful for numerous tagging applications.
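A bidirectional encoder then only needs to run two such recurrences and concatenate their outputs; a short sketch building on the lstm_step and init_params helpers from the previous snippet (again an illustrative layout, not the released code):

```python
import numpy as np

def run_lstm(xs, params, d_hid):
    """Run lstm_step (defined in the previous sketch) over a list of input vectors."""
    h, c = np.zeros(d_hid), np.zeros(d_hid)
    states = []
    for x in xs:
        h, c = lstm_step(x, h, c, params)
        states.append(h)
    return states

def bilstm_encode(xs, fwd_params, bwd_params, d_hid):
    """Context representation h_t = [fwd_h_t ; bwd_h_t] for every word in the sentence."""
    fwd = run_lstm(xs, fwd_params, d_hid)              # left-context states
    bwd = run_lstm(xs[::-1], bwd_params, d_hid)[::-1]  # right-context states
    return [np.concatenate([f, b]) for f, b in zip(fwd, bwd)]

rng = np.random.default_rng(0)
xs = [rng.normal(size=8) for _ in range(4)]            # four word embeddings
ctx = bilstm_encode(xs, init_params(8, 16, rng), init_params(8, 16, rng), d_hid=16)
```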
# 2.2 CRF Tagging Models
A very simple, but surprisingly effective, tagging model is to use the h_t's as features to make independent tagging decisions for each output y_t (Ling et al., 2015b). Despite this model's success in simple problems like POS tagging, its independent classification decisions are limiting when there are strong dependencies across output labels. NER is one such task, since the "grammar" that characterizes interpretable sequences of tags imposes several hard constraints (e.g., I-PER cannot follow B-LOC; see §2.4 for details) that would be impossible to model with independence assumptions.

Therefore, instead of modeling tagging decisions independently, we model them jointly using a conditional random field (Lafferty et al., 2001). For an input sentence

X = (x_1, x_2, . . . , x_n),

we consider P to be the matrix of scores output by the bidirectional LSTM network. P is of size n × k, where k is the number of distinct tags, and P_{i,j} corresponds to the score of the jth tag of the ith word in a sentence. For a sequence of predictions

y = (y_1, y_2, . . . , y_n),
we define its score to be

s(X, y) = Σ_{i=0}^{n} A_{y_i, y_{i+1}} + Σ_{i=1}^{n} P_{i, y_i}

where A is a matrix of transition scores such that A_{i,j} represents the score of a transition from the tag i to tag j. y_0 and y_n are the start and end tags of a sentence, that we add to the set of possible tags. A is therefore a square matrix of size k + 2.
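As a sketch, the score of a fixed tag sequence can be computed directly from P and A; storing the start and end tags as extra rows/columns of A is an assumption made here for illustration.

```python
import numpy as np

def sequence_score(P, A, tags, start, stop):
    """s(X, y): emission scores P[i, y_i] plus transition scores A[y_i, y_{i+1}],
    including the transition from the start tag and to the stop tag."""
    score = 0.0
    prev = start
    for i, t in enumerate(tags):
        score += A[prev, t] + P[i, t]
        prev = t
    score += A[prev, stop]
    return score

# Toy usage: 3 words, 2 real tags, plus start/stop rows and columns in A (k + 2 = 4).
rng = np.random.default_rng(0)
P = rng.normal(size=(3, 2))          # bidirectional-LSTM scores, one row per word
A = rng.normal(size=(4, 4))          # transition matrix over tags + start/stop
print(sequence_score(P, A, tags=[0, 1, 1], start=2, stop=3))
```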
A softmax over all possible tag sequences yields a probability for the sequence y:

p(y|X) = e^{s(X, y)} / Σ_{ỹ ∈ Y_X} e^{s(X, ỹ)}

During training, we maximize the log-probability of the correct tag sequence:

log(p(y|X)) = s(X, y) - log( Σ_{ỹ ∈ Y_X} e^{s(X, ỹ)} )
            = s(X, y) - logadd_{ỹ ∈ Y_X} s(X, ỹ),    (1)

where Y_X represents all possible tag sequences (even those that do not verify the IOB format) for a sentence X. From the formulation above, it is evident that we encourage our network to produce a valid sequence of output labels. While decoding, we predict the output sequence that obtains the maximum score given by:
y* = argmax_{ỹ ∈ Y_X} s(X, ỹ).    (2)

Since we are only modeling bigram interactions between outputs, both the summation in Eq. 1 and the maximum a posteriori sequence y* in Eq. 2 can be computed using dynamic programming.
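Both quantities follow from standard O(nk^2) recursions over the tag lattice. A minimal NumPy sketch, reusing the P/A/start/stop convention of the previous snippet (an illustrative assumption rather than the released implementation):

```python
import numpy as np

def log_partition(P, A, start, stop):
    """log of the sum over all tag sequences of exp(s(X, y)), via the forward recursion."""
    k = P.shape[1]
    alpha = A[start, :k] + P[0]                      # scores of length-1 prefixes
    for i in range(1, P.shape[0]):
        # alpha_new[j] = logsumexp_prev(alpha[prev] + A[prev, j]) + P[i, j]
        m = alpha[:, None] + A[:k, :k] + P[i][None, :]
        alpha = np.logaddexp.reduce(m, axis=0)
    return np.logaddexp.reduce(alpha + A[:k, stop])

def viterbi(P, A, start, stop):
    """Highest-scoring tag sequence y* = argmax_y s(X, y) and its score."""
    n, k = P.shape
    delta = A[start, :k] + P[0]
    back = []
    for i in range(1, n):
        m = delta[:, None] + A[:k, :k] + P[i][None, :]
        back.append(m.argmax(axis=0))                # best previous tag for each tag
        delta = m.max(axis=0)
    final = delta + A[:k, stop]
    best = [int(final.argmax())]
    for bp in reversed(back):
        best.append(int(bp[best[-1]]))
    return best[::-1], float(final.max())

rng = np.random.default_rng(0)
P, A = rng.normal(size=(3, 2)), rng.normal(size=(4, 4))
print(viterbi(P, A, start=2, stop=3)[0], log_partition(P, A, start=2, stop=3))
```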
# 2.3 Parameterization and Training

The scores associated with each tagging decision for each token (i.e., the P_{i,y}'s) are defined to be the dot product between the embedding of a word-in-context computed with a bidirectional LSTM, exactly the same as the POS tagging model of Ling et al. (2015b), and these are combined with bigram compatibility scores (i.e., the A_{y,y'}'s). This architecture is shown in Figure 1. Circles represent observed variables, diamonds are deterministic functions of their parents, and double circles are random variables.

[Figure 1 here: word embeddings at the bottom, a Bi-LSTM encoder above them, and a CRF layer on top.]

Figure 1: Main architecture of the network. Word embeddings are given to a bidirectional LSTM. l_i represents the word i and its left context, r_i represents the word i and its right context. Concatenating these two vectors yields a representation of the word i in its context, c_i.
The parameters of this model are thus the matrix of bigram compatibility scores A, and the parameters that give rise to the matrix P, namely the parameters of the bidirectional LSTM, the linear feature weights, and the word embeddings. As in part 2.2, let x_i denote the sequence of word embeddings for every word in a sentence, and y_i be their associated tags. We return to a discussion of how the embeddings x_i are modeled in Section 4. The sequence of word embeddings is given as input to a bidirectional LSTM, which returns a representation of the left and right context for each word as explained in 2.1.

These representations are concatenated (c_i) and linearly projected onto a layer whose size is equal to the number of distinct tags. Instead of using the softmax output from this layer, we use a CRF as previously described to take into account neighboring tags, yielding the final predictions for every word y_i. Additionally, we observed that adding a hidden layer between c_i and the CRF layer marginally improved our results. All results reported with this model incorporate this extra layer. The parameters are trained to maximize Eq. 1 of observed sequences of NER tags in an annotated corpus, given the observed words.

# 2.4 Tagging Schemes
The task of named entity recognition is to assign a named entity label to every word in a sentence. A single named entity could span several tokens within a sentence. Sentences are usually represented in the IOB format (Inside, Outside, Beginning) where every token is labeled as B-label if the token is the beginning of a named entity, I-label if it is inside a named entity but not the first token within the named entity, or O otherwise. However, we decided to use the IOBES tagging scheme, a variant of IOB commonly used for named entity recognition, which encodes information about singleton entities (S) and explicitly marks the end of named entities (E). Using this scheme, tagging a word as I-label with high confidence narrows down the choices for the subsequent word to I-label or E-label; however, the IOB scheme is only capable of determining that the subsequent word cannot be the interior of another label. Ratinov and Roth (2009) and Dai et al. (2015) showed that using a more expressive tagging scheme like IOBES improves model performance marginally. However, we did not observe a significant improvement over the IOB tagging scheme.
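For reference, converting IOB2 tags to IOBES is a purely mechanical relabeling; a small sketch (assuming well-formed IOB2 input):

```python
def iob_to_iobes(tags):
    """Convert IOB2 tags (B-X, I-X, O) to IOBES (adds S-X for singletons, E-X for ends)."""
    out = []
    for i, tag in enumerate(tags):
        if tag == "O":
            out.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        ends_here = nxt != "I-" + label        # entity stops unless the next tag continues it
        if prefix == "B":
            out.append(("S-" if ends_here else "B-") + label)
        else:  # prefix == "I"
            out.append(("E-" if ends_here else "I-") + label)
    return out

print(iob_to_iobes(["B-PER", "I-PER", "O", "B-LOC"]))
# ['B-PER', 'E-PER', 'O', 'S-LOC']
```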
# 3 Transition-Based Chunking Model
As an alternative to the LSTM-CRF discussed in the previous section, we explore a new architecture that chunks and labels a sequence of inputs using an algorithm similar to transition-based dependency parsing. This model directly constructs representations of the multi-token names (e.g., the name Mark Watney is composed into a single representation).

This model relies on a stack data structure to incrementally construct chunks of the input. To obtain representations of this stack used for predicting subsequent actions, we use the Stack-LSTM presented by Dyer et al. (2015), in which the LSTM is augmented with a "stack pointer." While sequential LSTMs model sequences from left to right, stack LSTMs permit embedding of a stack of objects that are both added to (using a push operation) and removed from (using a pop operation). This allows the Stack-LSTM to work like a stack that maintains a "summary embedding" of its contents. We refer to this model as the Stack-LSTM or S-LSTM model for simplicity.
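The key property is that pushing advances the recurrence from the current top state while popping rewinds it, so the summary embedding always reflects the current stack contents. A simplified sketch in which a plain tanh recurrence stands in for the gated Stack-LSTM of Dyer et al. (2015):

```python
import numpy as np

class StackRNN:
    """Simplified stack-backed recurrence: push runs one recurrent step from the
    current top state, pop restores the previous state, and summary() returns an
    embedding of the whole stack contents."""

    def __init__(self, d_in, d_hid, rng):
        self.W = rng.normal(scale=0.1, size=(d_hid, d_in))
        self.U = rng.normal(scale=0.1, size=(d_hid, d_hid))
        self.states = [np.zeros(d_hid)]       # states[0] is the empty-stack state

    def push(self, x):
        h = np.tanh(self.W @ x + self.U @ self.states[-1])
        self.states.append(h)

    def pop(self):
        return self.states.pop()              # summary reverts to the previous top

    def summary(self):
        return self.states[-1]

rng = np.random.default_rng(0)
s = StackRNN(d_in=8, d_hid=16, rng=rng)
s.push(rng.normal(size=8)); s.push(rng.normal(size=8)); s.pop()
print(s.summary().shape)                      # (16,)
```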
Finally, we refer interested readers to the original paper (Dyer et al., 2015) for details about the Stack-LSTM model, since in this paper we merely use the same architecture through a new transition-based algorithm presented in the following Section.

# 3.1 Chunking Algorithm

We designed a transition inventory, which is given in Figure 2, and is inspired by transition-based parsers, in particular the arc-standard parser of Nivre (2004). In this algorithm, we make use of two stacks (designated output and stack, representing, respectively, completed chunks and scratch space) and a buffer that contains the words that have yet to be processed. The transition inventory contains the following transitions: the SHIFT transition moves a word from the buffer to the stack; the OUT transition moves a word from the buffer directly into the output stack; and the REDUCE(y) transition pops all items from the top of the stack creating a "chunk," labels this with label y, and pushes a representation of this chunk onto the output stack. The algorithm completes when the stack and buffer are both empty. The algorithm is depicted in Figure 3, which shows the sequence of operations required to process the sentence Mark Watney visited Mars.
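Executing a given action sequence is straightforward bookkeeping over the buffer, stack, and output; the sketch below only replays actions and collects labeled segments (the REDUCE-label string format and the driver are assumptions for illustration; in the model, each action is chosen by a classifier over the stack, buffer, and output embeddings):

```python
def run_transitions(words, actions):
    """Apply a SHIFT / OUT / REDUCE-y action sequence to a buffer of words and
    collect the labeled segments (chunking only; no scoring model)."""
    buffer = list(words)
    stack, output, segments = [], [], []
    for act in actions:
        if act == "SHIFT":
            stack.append(buffer.pop(0))
        elif act == "OUT":
            output.append(buffer.pop(0))
        elif act.startswith("REDUCE-"):
            label = act.split("-", 1)[1]
            chunk = " ".join(stack)            # pop the whole stack as one chunk
            stack.clear()
            output.append((chunk, label))      # output mixes plain words and labeled chunks
            segments.append((chunk, label))
    assert not buffer and not stack, "algorithm ends with empty stack and buffer"
    return output, segments

out, segs = run_transitions(
    ["Mark", "Watney", "visited", "Mars"],
    ["SHIFT", "SHIFT", "REDUCE-PER", "OUT", "SHIFT", "REDUCE-LOC"])
print(segs)   # [('Mark Watney', 'PER'), ('Mars', 'LOC')]
```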
The model is parameterized by defining a probability distribution over actions at each time step, given the current contents of the stack, buffer, and output, as well as the history of actions taken. Following Dyer et al. (2015), we use stack LSTMs to compute a fixed dimensional embedding of each of these, and take a concatenation of these to obtain the full algorithm state. This representation is used to define a distribution over the possible actions that can be taken at each time step. The model is trained to maximize the conditional probability of sequences of reference actions (extracted from a labeled training corpus) given the input sentences. To label a new input sequence at test time, the maximum probability action is chosen greedily until the algorithm reaches a termination state. Although this is not guaranteed to find a global optimum, it is effective in practice. Since each token is either moved directly to the output (1 action) or first to the stack and then the output (2 actions), the total number of actions for a sequence of length n is maximally 2n. It is worth noting that the nature of this algorithm makes it agnostic to the tagging scheme used, since it directly predicts labeled chunks.
| Out_t | Stack_t | Buffer_t | Action | Out_{t+1} | Stack_{t+1} | Buffer_{t+1} | Segments |
| O | S | (u, u), B | SHIFT | O | (u, u), S | B | -- |
| O | (u, u), . . . , (v, v), S | B | REDUCE(y) | g(u, . . . , v, r_y), O | S | B | (u . . . v, y) |
| O | S | (u, u), B | OUT | g(u, r_∅), O | S | B | -- |

Figure 2: Transitions of the Stack-LSTM model indicating the action applied and the resulting state. Bold symbols indicate (learned) embeddings of words and relations, script symbols indicate the corresponding words and relations.

| Transition | Output | Stack | Buffer | Segment |
| | [] | [] | [Mark, Watney, visited, Mars] | |
| SHIFT | [] | [Mark] | [Watney, visited, Mars] | |
| SHIFT | [] | [Mark, Watney] | [visited, Mars] | |
| REDUCE(PER) | [(Mark Watney)-PER] | [] | [visited, Mars] | (Mark Watney)-PER |
| OUT | [(Mark Watney)-PER, visited] | [] | [Mars] | |
| SHIFT | [(Mark Watney)-PER, visited] | [Mars] | [] | |
| REDUCE(LOC) | [(Mark Watney)-PER, visited, (Mars)-LOC] | [] | [] | (Mars)-LOC |
Figure 3: Transition sequence for Mark Watney visited Mars with the Stack-LSTM model.

# 3.2 Representing Labeled Chunks

When the REDUCE(y) operation is executed, the algorithm shifts a sequence of tokens (together with their vector embeddings) from the stack to the output buffer as a single completed chunk. To compute an embedding of this sequence, we run a bidirectional LSTM over the embeddings of its constituent tokens together with a token representing the type of the chunk being identified (i.e., y). This function is given as g(u, . . . , v, r_y), where r_y is a learned embedding of a label type. Thus, the output buffer contains a single vector representation for each labeled chunk that is generated, regardless of its length.

# 4 Input Word Embeddings
The input layers to both of our models are vector representations of individual words. Learning independent representations for word types from the limited NER training data is a difficult problem: there are simply too many parameters to reliably estimate. Since many languages have orthographic or morphological evidence that something is a name (or not a name), we want representations that are sensitive to the spelling of words. We therefore use a model that constructs representations of words from representations of the characters they are composed of (4.1). Our second intuition is that names, which may individually be quite varied, appear in regular contexts in large corpora. Therefore we use embeddings learned from a large corpus that are sensitive to word order (4.2). Finally, to prevent the models from depending on one representation or the other too strongly, we use dropout training and find this is crucial for good generalization performance (4.3).

Figure 4: The character embeddings of the word "Mars" are given to a bidirectional LSTM. We concatenate their last outputs to an embedding from a lookup table to obtain a representation for this word.
# 4.1 Character-based models of words

An important distinction of our work from most previous approaches is that we learn character-level features while training instead of hand-engineering prefix and suffix information about words. Learning character-level embeddings has the advantage of learning representations specific to the task and domain at hand. They have been found useful for morphologically rich languages and to handle the out-of-vocabulary problem for tasks like part-of-speech tagging and language modeling (Ling et al., 2015b) or dependency parsing (Ballesteros et al., 2015).
Figure 4 describes our architecture to generate a word embedding for a word from its characters. A character lookup table initialized at random contains an embedding for every character. The character embeddings corresponding to every character in a word are given in direct and reverse order to a forward and a backward LSTM. The embedding for a word derived from its characters is the concatenation of its forward and backward representations from the bidirectional LSTM. This character-level representation is then concatenated with a word-level representation from a word lookup table. During testing, words that do not have an embedding in the lookup table are mapped to a UNK embedding. To train the UNK embedding, we replace singletons with the UNK embedding with a probability 0.5. In all our experiments, the hidden dimension of the forward and backward character LSTMs are 25 each, which results in our character-based representation of words being of dimension 50.
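A sketch of this construction, with the 25-dimensional forward and backward character states described above but with a plain tanh recurrence standing in for the character LSTMs (all variable names and the toy vocabulary are illustrative assumptions):

```python
import numpy as np

def char_word_embedding(word, char_vocab, C, Wf, Uf, Wb, Ub, word_vec):
    """Character-derived representation: final state of a forward and a backward
    recurrence over character embeddings, concatenated with the word-lookup vector."""
    ids = [char_vocab[c] for c in word]
    d_hid = Uf.shape[0]

    def run(seq, W, U):
        h = np.zeros(d_hid)
        for i in seq:
            h = np.tanh(W @ C[i] + U @ h)
        return h                              # only the final state is used

    fwd = run(ids, Wf, Uf)                    # reads the characters left to right
    bwd = run(ids[::-1], Wb, Ub)              # reads them right to left
    return np.concatenate([fwd, bwd, word_vec])

rng = np.random.default_rng(0)
chars = {c: i for i, c in enumerate("abcdefghijklmnopqrstuvwxyzMRS")}
C = rng.normal(size=(len(chars), 10))         # character lookup table
Wf, Uf = rng.normal(size=(25, 10)), rng.normal(size=(25, 25))
Wb, Ub = rng.normal(size=(25, 10)), rng.normal(size=(25, 25))
word_vec = rng.normal(size=100)               # word-level embedding from the lookup table
print(char_word_embedding("Mars", chars, C, Wf, Uf, Wb, Ub, word_vec).shape)  # (150,)
```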
Recurrent models like RNNs and LSTMs are capable of encoding very long sequences; however, they have a representation biased towards their most recent inputs. As a result, we expect the final representation of the forward LSTM to be an accurate representation of the suffix of the word, and the final state of the backward LSTM to be a better representation of its prefix. Alternative approaches, most notably convolutional networks, have been proposed to learn representations of words from their characters (Zhang et al., 2015; Kim et al., 2015). However, convnets are designed to discover position-invariant features of their inputs. While this is appropriate for many problems, e.g., image recognition (a cat can appear anywhere in a picture), we argue that important information is position dependent (e.g., prefixes and suffixes encode different information than stems), making LSTMs an a priori better function class for modeling the relationship between words and their characters.

# 4.2 Pretrained embeddings
As in Collobert et al. (2011), we use pretrained word embeddings to initialize our lookup table. We observe significant improvements using pretrained word embeddings over randomly initialized ones. Embeddings are pretrained using skip-n-gram (Ling et al., 2015a), a variation of word2vec (Mikolov et al., 2013a) that accounts for word order. These embeddings are fine-tuned during training.

Word embeddings for Spanish, Dutch, German and English are trained using the Spanish Gigaword version 3, the Leipzig corpora collection, the German monolingual training data from the 2010 Machine Translation Workshop and the English Gigaword version 4 (with the LA Times and NY Times portions removed) respectively.2 We use an embedding dimension of 100 for English, 64 for other languages, a minimum word frequency cutoff of 4, and a window size of 8.

2 (Graff, 2011; Biemann et al., 2007; Callison-Burch et al., 2010; Parker et al., 2009)
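Initialization from pretrained vectors then amounts to copying rows into the lookup table, which is subsequently fine-tuned by backpropagation; a small sketch with hypothetical vectors:

```python
import numpy as np

def init_lookup_table(vocab, pretrained, dim, rng):
    """Word lookup table initialised from pretrained vectors where available,
    randomly otherwise; row 0 is reserved here for the UNK embedding."""
    table = rng.normal(scale=0.1, size=(len(vocab), dim))
    for word, idx in vocab.items():
        if word in pretrained:
            table[idx] = pretrained[word]      # later fine-tuned during training
    return table

# Toy usage with hypothetical pretrained vectors.
rng = np.random.default_rng(0)
vocab = {"<UNK>": 0, "mars": 1, "visited": 2}
pretrained = {"mars": rng.normal(size=100)}
table = init_lookup_table(vocab, pretrained, dim=100, rng=rng)
```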
# 4.3 Dropout training
Initial experiments showed that character-level embeddings did not improve our overall performance when used in conjunction with pretrained word representations. To encourage the model to depend on both representations, we use dropout training (Hinton et al., 2012), applying a dropout mask to the final embedding layer just before the input to the bidirectional LSTM in Figure 1. We observe a significant improvement in our model's performance after using dropout (see table 5).
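A sketch of the mask applied to the concatenated representation; inverted dropout (rescaling at training time) is an implementation choice assumed here, as the text only states that a dropout mask is applied before the bidirectional LSTM input:

```python
import numpy as np

def dropout(embedding, rate, rng, train=True):
    """Inverted dropout on the concatenated word representation; at test time the
    input passes through unchanged."""
    if not train or rate == 0.0:
        return embedding
    mask = (rng.random(embedding.shape) >= rate).astype(embedding.dtype)
    return embedding * mask / (1.0 - rate)

rng = np.random.default_rng(0)
x = rng.normal(size=150)              # char-based (50) + word-level (100) embedding
x_dropped = dropout(x, rate=0.5, rng=rng)
```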
# 5 Experiments

This section presents the methods we use to train our models, the results we obtained on various tasks, and the impact of our networks' configuration on model performance.

# 5.1 Training

For both models presented, we train our networks using the back-propagation algorithm, updating our parameters on every training example, one at a time, using stochastic gradient descent (SGD) with
a learning rate of 0.01 and a gradient clipping of 5.0. Several methods have been proposed to enhance the performance of SGD, such as Adadelta (Zeiler, 2012) or Adam (Kingma and Ba, 2014). Although we observe faster convergence using these methods, none of them perform as well as SGD with gradient clipping.
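A per-example update with clipping might look as follows; element-wise clipping to [-5, 5] is one plausible reading of "gradient clipping of 5.0" (clipping the global norm would be another), so this is a sketch rather than the released implementation:

```python
import numpy as np

def sgd_clip_update(params, grads, lr=0.01, clip=5.0):
    """Per-example SGD step with element-wise gradient clipping."""
    for name, g in grads.items():
        params[name] -= lr * np.clip(g, -clip, clip)

# Toy usage with a single weight matrix and a fake gradient.
params = {"W": np.zeros((3, 3))}
grads = {"W": np.array([[0.1, -7.0, 2.0], [0.0, 9.0, -0.5], [1.0, 1.0, 1.0]])}
sgd_clip_update(params, grads)
```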
Our LSTM-CRF model uses a single layer for the forward and backward LSTMs whose dimensions are set to 100. Tuning this dimension did not significantly impact model performance. We set the dropout rate to 0.5. Using higher rates negatively impacted our results, while smaller rates led to longer training time.

The stack-LSTM model uses two layers each of dimension 100 for each stack. The embeddings of the actions used in the composition functions have 16 dimensions each, and the output embedding is of dimension 20. We experimented with different dropout rates and reported the scores using the best dropout rate for each language.3 It is a greedy model that applies locally optimal actions until the entire sentence is processed; further improvements might be obtained with beam search (Zhang and Clark, 2011) or training with exploration (Ballesteros et al., 2016).

3 English (D=0.2), German, Spanish and Dutch (D=0.3)
1603.01360 | 25 | # 5.2 Data Sets
We test our model on different datasets for named entity recognition. To demonstrate our model's ability to generalize to different languages, we present results on the CoNLL-2002 and CoNLL-2003 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003) that contain independent named entity labels for English, Spanish, German and Dutch. All datasets contain four different types of named entities: locations, persons, organizations, and miscellaneous entities that do not belong in any of the three previous categories. Although POS tags were made available for all datasets, we did not include them in our models. We did not perform any dataset preprocessing, apart from replacing every digit with a zero in the English NER dataset.
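The single preprocessing step mentioned above amounts to a one-line substitution; the snippet below is an illustrative sketch rather than the authors' code:

```python
import re

def normalize_digits(token):
    """Replace every digit with '0', the only preprocessing applied to the English data."""
    return re.sub(r"\d", "0", token)

# Example: normalize_digits("CoNLL-2003") -> "CoNLL-0000"
```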
3English (D=0.2), German, Spanish and Dutch (D=0.3)
# 5.3 Results | 1603.01360#25 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 26 | 3English (D=0.2), German, Spanish and Dutch (D=0.3)
# 5.3 Results
Table 1 presents our comparisons with other models for named entity recognition in English. To make the comparison between our model and others fair, we report the scores of other models with and without the use of external labeled data such as gazetteers and knowledge bases. Our models do not use gazetteers or any external labeled resources. The best score reported on this task is by Luo et al. (2015). They obtained an F1 of 91.2 by jointly modeling the NER and entity linking tasks (Hoffart et al., 2011). Their model uses many hand-engineered features, including spelling features, WordNet clusters, Brown clusters, POS tags, chunk tags, as well as stemming and external knowledge bases like Freebase and Wikipedia. Our LSTM-CRF model outperforms all other systems, including the ones using external labeled data like gazetteers. Our Stack-LSTM model also outperforms all previous models that do not incorporate external features, apart from the one presented by Chiu and Nichols (2015). | 1603.01360#26 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 27 | Tables 2, 3 and 4 present our results on NER for German, Dutch and Spanish respectively, in comparison to other models. On these three languages, the LSTM-CRF model significantly outperforms all previous methods, including the ones using external labeled data. The only exception is Dutch, where the model of Gillick et al. (2015) can perform better by leveraging the information from other NER datasets. The Stack-LSTM also consistently presents state-of-the-art (or close to state-of-the-art) results compared to systems that do not use external data.
As we can see in the tables, the Stack-LSTM model is more dependent on character-based representations to achieve competitive performance; we hypothesize that the LSTM-CRF model requires less orthographic information since it gets more contextual information out of the bidirectional LSTMs, whereas the Stack-LSTM model consumes the words one by one and relies only on the word representations when it chunks words.
# 5.4 Network architectures
Our models had several components that we could tweak to understand their impact on the overall performance. We explored the impact that the CRF, the character-level representations, pretraining of our | 1603.01360#27 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 28 | Our models had several components that we could tweak to understand their impact on the overall performance. We explored the impact that the CRF, the character-level representations, pretraining of our
Model                                 F1
Collobert et al. (2011)*              89.59
Lin and Wu (2009)                     83.78
Lin and Wu (2009)*                    90.90
Huang et al. (2015)*                  90.10
Passos et al. (2014)                  90.05
Passos et al. (2014)*                 90.90
Luo et al. (2015)* + gaz              89.9
Luo et al. (2015)* + gaz + linking    91.2
Chiu and Nichols (2015)               90.69
Chiu and Nichols (2015)*              90.77
LSTM-CRF (no char)                    90.20
LSTM-CRF                              90.94
S-LSTM (no char)                      87.96
S-LSTM                                90.33
Table 1: English NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data | 1603.01360#28 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 29 | Table 1: English NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data
Model                     F1
Florian et al. (2003)*    72.41
Ando and Zhang (2005a)    75.27
Qi et al. (2009)          75.72
Gillick et al. (2015)     72.08
Gillick et al. (2015)*    76.22
LSTM-CRF – no char        75.06
LSTM-CRF                  78.76
S-LSTM – no char          65.87
S-LSTM                    75.66
Table 2: German NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data
Model                     F1
Carreras et al. (2002)    77.05
Nothman et al. (2013)     78.6
Gillick et al. (2015)     78.08
Gillick et al. (2015)*    82.84
LSTM-CRF – no char        73.14
LSTM-CRF                  81.74
S-LSTM – no char          69.90
S-LSTM                    79.88
Table 3: Dutch NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data | 1603.01360#29 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 30 | Table 3: Dutch NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data
Model                          F1
Carreras et al. (2002)*        81.39
Santos and Guimarães (2015)    82.21
Gillick et al. (2015)          81.83
Gillick et al. (2015)*         82.95
LSTM-CRF – no char             83.44
LSTM-CRF                       85.75
S-LSTM – no char               79.46
S-LSTM                         83.93
Table 4: Spanish NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data
word embeddings and dropout had on our LSTM-CRF model. We observed that pretraining our word embeddings gave us the biggest improvement in overall performance, +7.31 F1. The CRF layer gave us an increase of +1.79, while using dropout resulted in a difference of +1.17, and finally, learning character-level word embeddings resulted in an increase of about +0.74. For the Stack-LSTM we performed a similar set of experiments. Results with different architectures are given in Table 5. | 1603.01360#30 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 31 | Model     Variant                      F1
LSTM      char + dropout + pretrain    89.15
LSTM-CRF  char + dropout               83.63
LSTM-CRF  pretrain                     88.39
LSTM-CRF  pretrain + char              89.77
LSTM-CRF  pretrain + dropout           90.20
LSTM-CRF  pretrain + dropout + char    90.94
S-LSTM    char + dropout               80.88
S-LSTM    pretrain                     86.67
S-LSTM    pretrain + char              89.32
S-LSTM    pretrain + dropout           87.96
S-LSTM    pretrain + dropout + char    90.33
Table 5: English NER results with our models, using different configurations. "pretrain" refers to models that include pretrained word embeddings, "char" refers to models that include character-based modeling of words, "dropout" refers to models that include dropout rate.
# 6 Related Work | 1603.01360#31 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 32 | # 6 Related Work
In the CoNLL-2002 shared task, Carreras et al. (2002) obtained among the best results on both Dutch and Spanish by combining several small fixed-depth decision trees. The following year, in the CoNLL-2003 shared task, Florian et al. (2003) obtained the best score on German by combining the output of four diverse classifiers. Qi et al. (2009) later improved on this with a neural network by doing unsupervised learning on a massive unlabeled corpus. | 1603.01360#32 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 33 | Several other neural architectures have previously been proposed for NER. For instance, Collobert et al. (2011) use a CNN over a sequence of word embeddings with a CRF layer on top. This can be thought of as our first model without character-level embeddings and with the bidirectional LSTM being replaced by a CNN. More recently, Huang et al. (2015) presented a model similar to our LSTM-CRF, but using hand-crafted spelling features. Zhou and Xu (2015) also used a similar model and adapted it to the semantic role labeling task. Lin and Wu (2009) used a linear chain CRF with L2 regularization, adding phrase cluster features extracted from web data and spelling features. Passos et al. (2014) also used a linear chain CRF with spelling features and gazetteers.
Language independent NER models like ours have also been proposed in the past. Cucerzan | 1603.01360#33 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 34 | Language independent NER models like ours have also been proposed in the past. Cucerzan
and Yarowsky (1999; 2002) present semi-supervised bootstrapping algorithms for named entity recognition by co-training character-level (word-internal) and token-level (context) features. Eisenstein et al. (2011) use Bayesian nonparametrics to construct a database of named entities in an almost unsupervised setting. Ratinov and Roth (2009) quantitatively compare several approaches for NER and build their own supervised model using a regularized average perceptron and aggregating context information.
Finally, there is currently a lot of interest in models for NER that use letter-based representations. Gillick et al. (2015) model the task of sequence labeling as a sequence-to-sequence learning problem and incorporate character-based representations into their encoder model. Chiu and Nichols (2015) employ an architecture similar to ours, but instead use CNNs to learn character-level features, in a way similar to the work by Santos and Guimarães (2015).
# 7 Conclusion
This paper presents two neural architectures for sequence labeling that provide the best NER results ever reported in standard evaluation settings, even compared with models that use external resources, such as gazetteers. | 1603.01360#34 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 35 | A key aspect of our models is that they model output label dependencies, either via a simple CRF architecture or using a transition-based algorithm to explicitly construct and label chunks of the input. Word representations are also crucially important for success: we use both pre-trained word representations and "character-based" representations that capture morphological and orthographic information. To prevent the learner from depending too heavily on one representation class, dropout is used.
# Acknowledgments
This work was sponsored in part by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under Contract No. HR0011-15-C-0114. Miguel Ballesteros is supported by the European Commission under the contract numbers FP7-ICT-610411 (project
MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).
# References | 1603.01360#35 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 36 | MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).
# References
[Ando and Zhang2005a] Rie Kubota Ando and Tong Zhang. 2005a. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817–1853. [Ando and Zhang2005b] Rie Kubota Ando and Tong Zhang. 2005b. Learning predictive structures. JMLR, 6:1817–1853.
[Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015. Improved transition-based dependency parsing by modeling characters instead of words with LSTMs. In Proceedings of EMNLP. [Ballesteros et al.2016] Miguel Ballesteros, Yoav Goldberg, Chris Dyer, and Noah A. Smith. 2016. Training with Exploration Improves a Greedy Stack-LSTM Parser. arXiv:1603.03793.
[Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term depen- dencies with gradient descent is difï¬cult. Neural Net- works, IEEE Transactions on, 5(2):157â166. | 1603.01360#36 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 37 | [Biemann et al.2007] Chris Biemann, Gerhard Heyer, Uwe Quasthoff, and Matthias Richter. 2007. The leipzig corpora collection-monolingual corpora of standard size. Proceedings of Corpus Linguistic.
[Callison-Burch et al.2010] Chris Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F. Zaidan. 2010. Findings of the 2010 joint workshop on statistical machine translation and metrics for machine translation. In Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17–53. Association for Computational Linguistics.
[Carreras et al.2002] Xavier Carreras, Lluís Màrquez, and Lluís Padró. 2002. Named entity extraction using AdaBoost. In Proceedings of the 6th Conference on Natural Language Learning, pages 1–4.
[Chiu and Nichols2015] Jason PC Chiu and Eric Nichols. 2015. Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308. | 1603.01360#37 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 38 | [Collobert et al.2011] Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language process- ing (almost) from scratch. The Journal of Machine Learning Research, 12:2493â2537. [Cucerzan and Yarowsky1999] Silviu
Cucerzan and David Yarowsky. 1999. Language independent named entity recognition combining morphological and contextual evidence. In Proceedings of the 1999
Joint SIGDAT Conference on EMNLP and VLC, pages 90–99.
[Cucerzan and Yarowsky2002] Silviu Cucerzan and David Yarowsky. 2002. Language independent NER using a unified model of internal and contextual evidence. In Proceedings of the 6th Conference on Natural Language Learning - Volume 20, pages 1–4. Association for Computational Linguistics.
[Dai et al.2015] Hong-Jie Dai, Po-Ting Lai, Yung-Chun Chang, and Richard Tzong-Han Tsai. 2015. Enhancing of chemical compound and drug name recognition using representative tag scheme and fine-grained tokenization. Journal of Cheminformatics, 7(Suppl 1):S14. | 1603.01360#38 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 39 | [Dyer et al.2015] Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015. Transition-based dependency parsing with stack long short-term memory. In Proc. ACL. [Eisenstein et al.2011] Jacob Eisenstein,
Tae Yano, William W Cohen, Noah A Smith, and Eric P Xing. 2011. Structured databases of named entities from bayesian nonparametrics. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 2â12. Association for Computational Linguistics.
[Florian et al.2003] Radu Florian, Abe Ittycheriah, Hongyan Jing, and Tong Zhang. 2003. Named entity recognition through classifier combination. In Proceedings of the Seventh Conference on Natural Language Learning at HLT-NAACL 2003 - Volume 4, pages 168–171. Association for Computational Linguistics.
[Gillick et al.2015] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilin- gual language processing from bytes. arXiv preprint arXiv:1512.00103. | 1603.01360#39 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 40 | [Graff2011] David Graff. 2011. Spanish gigaword third edition (ldc2011t12). Linguistic Data Consortium, Univer-sity of Pennsylvania, Philadelphia, PA.
[Graves and Schmidhuber2005] Alex Graves and Jürgen Schmidhuber. 2005. Framewise phoneme classification with bidirectional LSTM networks. In Proc. IJCNN.
[Hinton et al.2012] Geoffrey E Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580.
[Hochreiter and Schmidhuber1997] Sepp Hochreiter and Jürgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735–1780.
[Hoffart et al.2011] Johannes Hoffart, Mohamed Amir Yosef, Ilaria Bordino, Hagen Fürstenau, Manfred Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, | 1603.01360#40 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 41 | and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Conference on Empirical Methods in Natural Language Processing, pages 782–792. Association for Computational Linguistics.
[Huang et al.2015] Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991.
[Kim et al.2015] Yoon Kim, Yacine Jernite, David Sontag, and Alexander M. Rush. 2015. Character-aware neural language models. CoRR, abs/1508.06615. [Kingma and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980.
[Lafferty et al.2001] John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random ï¬elds: Probabilistic models for segmenting and label- ing sequence data. In Proc. ICML. | 1603.01360#41 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 42 | [Lin and Wu2009] Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Pro- ceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP: Volume 2-Volume 2, pages 1030â1038. As- sociation for Computational Linguistics.
[Ling et al.2015a] Wang Ling, Lin Chu-Cheng, Yulia Tsvetkov, Silvio Amir, Ramón Fernandez Astudillo, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Not all contexts are created equal: Better word representations with variable attention. In Proc. EMNLP.
[Ling et al.2015b] Wang Ling, Tiago Lu´ıs, Lu´ıs Marujo, Ram´on Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015b. Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). | 1603.01360#42 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 43 | [Luo et al.2015] Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint named entity recog- nition and disambiguation. In Proc. EMNLP.
[Mikolov et al.2013a] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a. Efficient estimation of word representations in vector space. arXiv preprint arXiv:1301.3781.
[Mikolov et al.2013b] Tomas Mikolov, Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. NIPS.
[Nivre2004] Joakim Nivre. 2004. Incrementality in deterministic dependency parsing. In Proceedings of the Workshop on Incremental Parsing: Bringing Engineering and Cognition Together. | 1603.01360#43 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 44 | [Nothman et al.2013] Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013. Learning multilingual named entity recognition from wikipedia. Artiï¬cial Intelligence, 194:151â175. [Parker et al.2009] Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English gigaword fourth edition (ldc2009t13). Linguistic Data Consortium, Univer-sity of Pennsylvania, Philadel- phia, PA.
[Passos et al.2014] Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase embeddings for named entity resolution. arXiv preprint arXiv:1404.5367.
[Qi et al.2009] Yanjun Qi, Ronan Collobert, Pavel Kuksa, Koray Kavukcuoglu, and Jason Weston. 2009. Combining labeled and unlabeled data with word-class distribution learning. In Proceedings of the 18th ACM Conference on Information and Knowledge Management, pages 1737–1740. ACM. | 1603.01360#44 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 45 | [Ratinov and Roth2009] Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thir- teenth Conference on Computational Natural Lan- guage Learning, pages 147â155. Association for Computational Linguistics.
[Santos and Guimarães2015] Cicero Nogueira dos Santos and Victor Guimarães. 2015. Boosting named entity recognition with neural character embeddings. arXiv preprint arXiv:1505.05008.
[Tjong Kim Sang and De Meulder2003] Erik F. Tjong Kim Sang and Fien De Meulder. 2003. Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. CoNLL.
[Tjong Kim Sang2002] Erik F. Tjong Kim Sang. 2002. Introduction to the CoNLL-2002 shared task: Language-independent named entity recognition. In Proc. CoNLL.
[Turian et al.2010] Joseph Turian, Lev Ratinov, and Yoshua Bengio. 2010. Word representations: A simple and general method for semi-supervised learning. In Proc. ACL.
[Zeiler2012] Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701. | 1603.01360#45 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01360 | 46 | [Zeiler2012] Matthew D Zeiler. 2012. Adadelta: An adaptive learning rate method. arXiv preprint arXiv:1212.5701.
[Zhang and Clark2011] Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized perceptron and beam search. Computational Linguistics, 37(1).
[Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classification. In Advances in Neural Information Processing Systems, pages 649–657.
[Zhou and Xu2015] Jie Zhou and Wei Xu. 2015. End-to-end learning of semantic role labeling using recurrent
neural networks. In Proceedings of the Annual Meeting of the Association for Computational Linguistics. | 1603.01360#46 | Neural Architectures for Named Entity Recognition | State-of-the-art named entity recognition systems rely heavily on
hand-crafted features and domain-specific knowledge in order to learn
effectively from the small, supervised training corpora that are available. In
this paper, we introduce two new neural architectures---one based on
bidirectional LSTMs and conditional random fields, and the other that
constructs and labels segments using a transition-based approach inspired by
shift-reduce parsers. Our models rely on two sources of information about
words: character-based word representations learned from the supervised corpus
and unsupervised word representations learned from unannotated corpora. Our
models obtain state-of-the-art performance in NER in four languages without
resorting to any language-specific knowledge or resources such as gazetteers. | http://arxiv.org/pdf/1603.01360 | Guillaume Lample, Miguel Ballesteros, Sandeep Subramanian, Kazuya Kawakami, Chris Dyer | cs.CL | Proceedings of NAACL 2016 | null | cs.CL | 20160304 | 20160407 | [
{
"id": "1603.03793"
},
{
"id": "1511.08308"
},
{
"id": "1512.00103"
},
{
"id": "1505.05008"
}
] |
1603.01025 | 0 | arXiv:1603.01025v2 [cs.NE] 17 Mar 2016
# Convolutional Neural Networks using Logarithmic Data Representation
# Daisuke Miyashita Stanford University, Stanford, CA 94305 USA Toshiba, Kawasaki, Japan
[email protected]
# Edward H. Lee Stanford University, Stanford, CA 94305 USA
# [email protected]
# Boris Murmann Stanford University, Stanford, CA 94305 USA
# [email protected]
# Abstract | 1603.01025#0 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 1 | Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, | 1603.01025#1 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 3 | # 1. Introduction
Deep convolutional neural networks (CNN) have demonstrated state-of-the-art performance in image classification
(Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015) but have steadily grown in computational complexity. For example, the Deep Residual Learning (He et al., 2015) set a new record in image classification accuracy at the expense of 11.3 billion floating-point multiply-and-add operations per forward-pass of an image and 230 MB of memory to store the weights in its 152-layer network.
In order for these large networks to run in real-time applications such as mobile or embedded platforms, it is often necessary to use low-precision arithmetic and apply compression techniques. Recently, many researchers have successfully deployed networks that compute using 8-bit fixed-point representation (Vanhoucke et al., 2011; Abadi et al., 2015) and have successfully trained networks with 16-bit fixed point (Gupta et al., 2015). This work in particular is built upon the idea that algorithm-level noise tolerance of the network can motivate simplifications in hardware complexity. | 1603.01025#3 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 4 | Interesting directions point towards matrix factorization (Denton et al., 2014) and tensorification (Novikov et al., 2015) by leveraging structure of the fully-connected (FC) layers. Another promising area is to prune the FC layer before mapping this to sparse matrix-matrix routines in GPUs (Han et al., 2015b). However, many of these inventions aim at systems that meet some required and specific criteria, such as networks that have many, large FC layers or accelerators that handle efficient sparse matrix-matrix arithmetic. And with network architectures currently pushing towards increasing the depth of convolutional layers by settling for fewer dense FC layers (He et al., 2015; Szegedy et al., 2015), there are potential problems in motivating a one-size-fits-all solution to handle these computational and memory demands.
We propose a general method of representing and comput- | 1603.01025#4 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 5 | We propose a general method of representing and comput-
ing the dot products in a network that can allow networks with minimal constraint on the layer properties to run more efficiently in digital hardware. In this paper we explore the use of communicating activations, storing weights, and computing the atomic dot-products in the binary logarithmic (base-2 logarithmic) domain for both inference and training. The motivations for moving to this domain are the following (an illustrative sketch follows the list):
⢠Training networks with weight decay leads to ï¬nal weights that are distributed non-uniformly around 0.
⢠Similarly, activations are also highly concentrated near 0. Our work uses rectiï¬ed Linear Units (ReLU) as the non-linearity.
⢠Logarithmic representations can encode data with very large dynamic range in fewer bits than can ï¬xed- point representation (Gautschi et al., 2016).
⢠Data representation in log-domain is naturally en- coded in digital hardware (as shown in Section 4.3). | 1603.01025#5 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
encoded to as little as 5 bits without a significant accuracy penalty. There has also been recent work in training using low precision arithmetic. (Gupta et al., 2015) propose a stochastic rounding scheme to help train networks using 16-bit fixed-point. (Lin et al., 2015) propose quantized back-propagation and ternary connect. This method reduces the number of floating-point multiplications by casting these operations into powers-of-two multiplies, which are easily realized with bitshifts in digital hardware. They apply this technique on MNIST and CIFAR10 with little loss in performance. However, their method does not completely eliminate all multiplications end-to-end. During test-time the network uses the learned full resolution weights for forward propagation. Training with reduced precision is motivated by the idea that high-precision gradient updates are unnecessary for the stochastic optimization of networks (Bottou & Bousquet, 2007; Bishop, 1995; Audhkhasi et al., 2013). In fact, there are some studies that show that gradient noise helps convergence. For example, (Neelakantan et al., 2015)
Our contributions are as follows:
⢠we show that networks obtain higher classiï¬cation accuracies with logarithmic quantization than linear quantization using traditional ï¬xed-point at equivalent resolutions.
⢠we show that activations are more robust to quantiza- tion than weights. This is because the number of ac- tivations tend to be larger than the number of weights which are reused during convolutions.
⢠we apply our logarithmic data representation on state- of-the-art networks, allowing activations and weights to use only 3b with almost no loss in classiï¬cation performance.
Hardware implementations. There have been a few but significant advances in the development of specialized hardware for large networks. For example, (Farabet et al., 2010) developed Field-Programmable Gate Arrays (FPGA) to perform real-time forward propagation. These groups have also performed a comprehensive study of classification performance and energy efficiency as a function of resolution. (Zhang et al., 2015) have also explored the design of convolutions in the context of memory versus compute management under the RoofLine model. Other works focus on specialized, optimized kernels for general purpose GPUs (Chetlur et al., 2014).
# 3. Concept and Motivation
⢠we generalize base-2 arithmetic to handle different 2 enables base. In particular, we show that a base- the ability to capture large dynamic ranges of weights and activations but also ï¬ner precisions across the en- coded range of values as well.
⢠we develop logarithmic backpropagation for efï¬cient training.
Each convolutional and fully-connected layer of a network performs matrix operations that distill down to dot products y = w^T x, where x ∈ R^n is the input, w ∈ R^n the weights, and y the activations before being transformed by the non-linearity (e.g. ReLU). Using conventional digital hardware, this operation is performed using n multiply-and-add operations using floating or fixed point representation as shown in Figure 1(a). However, this dot product can also be computed in the log-domain as shown in Figure 1(b,c).
# 2. Related work
Reduced-precision computation. (Shin et al., 2016; Sung et al., 2015; Vanhoucke et al., 2011; Han et al., 2015a) analyzed the effects of quantizing the trained weights for inference. For example, (Han et al., 2015b) shows that convolutional layers in AlexNet (Krizhevsky et al., 2012) can be
# 3.1. Proposed Method 1.
The first proposed method, as shown in Figure 1(b), is to transform one operand to its log representation, convert the resulting transformation back to the linear domain, and multiply this by the other operand. This is simply

$$w^T x \simeq \sum_{i=1}^{n} w_i \times 2^{\tilde{x}_i} = \sum_{i=1}^{n} \mathrm{Bitshift}(w_i, \tilde{x}_i), \quad (1)$$

where $\tilde{x}_i = \mathrm{Quantize}(\log_2(x_i))$, $\mathrm{Quantize}(\bullet)$ quantizes $\bullet$ to an integer, and $\mathrm{Bitshift}(a, b)$ is the function that bitshifts a value $a$ by an integer $b$ in fixed-point arithmetic. In floating-point, this operation is simply an addition of $b$ to the exponent part of $a$. Taking advantage of the $\mathrm{Bitshift}(a, b)$ operator to perform multiplication obviates the need for expensive digital multipliers.
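A minimal NumPy sketch of this first scheme (our illustration; `quantize_log2` is a simplified stand-in for Quantize(log2(·)) that ignores clipping, and the explicit `2**x̃` multiply stands in for the hardware bitshift):

```python
import numpy as np

def quantize_log2(x):
    """Round log2|x| to the nearest integer; zeros stay zero (simplified Quantize)."""
    x = np.asarray(x, dtype=np.float64)
    exp = np.where(x != 0, np.round(np.log2(np.abs(x) + 1e-45)), 0)
    return exp.astype(np.int64), np.sign(x)

def dot_method1(w, x):
    """Eq. (1): keep w in fixed/float form, encode x as (sign, integer exponent),
    and replace each multiply by a shift of w, i.e. w_i * 2**x̃_i."""
    x_exp, x_sign = quantize_log2(x)
    return float(np.sum(w * x_sign * np.exp2(x_exp)))

w = np.array([0.30, -0.12, 0.05, 0.70])
x = np.array([1.90, 0.55, 0.00, 3.20])
print(dot_method1(w, x), "vs exact", float(w @ x))
```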
# 3.2. Proposed Method 2.
[Figure 1 diagram: (a) Conventional — 32b float weights and activations fetched from memory, multiply-accumulate, large memory bandwidth; (b) Proposed 1 — activations converted to/from the log domain (leftmost "1" position), bitshift-accumulate, small activation bandwidth; (c) Proposed 2 — 4b fixed log-domain weights and activations, accumulation via Eq. (3),(4), small bandwidth.]
The second proposed method, as shown in Figure 1(c), is to extend the first method to compute dot products in the log-domain for both operands. Additions in the linear domain map to sums of exponentials in the log-domain, and multiplications in the linear domain become log-additions. The resulting dot-product is
$$w^T x \simeq \sum_{i=1}^{n} 2^{\mathrm{Quantize}(\log_2(w_i)) + \mathrm{Quantize}(\log_2(x_i))} = \sum_{i=1}^{n} \mathrm{Bitshift}(1, \tilde{w}_i + \tilde{x}_i), \quad (2)$$
Figure 1. Concept and motivation of this study.
where the log-domain weights are $\tilde{w}_i = \mathrm{Quantize}(\log_2(w_i))$ and the log-domain inputs are $\tilde{x}_i = \mathrm{Quantize}(\log_2(x_i))$.
By transforming both the weights and inputs, we compute the original dot product by bitshifting 1 by an integer result $\tilde{w}_i + \tilde{x}_i$ and summing over all $i$.
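A matching sketch of the second scheme (again our simplified illustration; signs are tracked separately and clipping is omitted): both operands are reduced to integer exponents and every product becomes Bitshift(1, w̃_i + x̃_i).

```python
import numpy as np

def to_log_domain(v):
    """Integer base-2 exponents of the entries (simplified Quantize(log2(.)))."""
    v = np.asarray(v, dtype=np.float64)
    return np.round(np.log2(np.abs(v) + 1e-45)).astype(np.int64)

def dot_method2(w, x):
    """Eq. (2): accumulate Bitshift(1, w̃_i + x̃_i) = 2**(w̃_i + x̃_i) over non-zero pairs."""
    mask = (w != 0) & (x != 0)
    p = to_log_domain(w[mask]) + to_log_domain(x[mask])   # log-addition = linear multiply
    sign = np.sign(w[mask]) * np.sign(x[mask])
    return float(np.sum(sign * np.exp2(p.astype(np.float64))))

w = np.array([0.30, -0.12, 0.05, 0.70])
x = np.array([1.90, 0.55, 0.00, 3.20])
print(dot_method2(w, x), "vs exact", float(w @ x))
```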
# 3.3. Accumulation in log domain
Although Fig. 1(b,c) indicates a logarithm-to-linear converter between layers where the actual accumulation is performed in the linear domain, this accumulation is able to be performed in the log-domain using the approximation $\log_2(1 + x) \simeq x$ for $0 \le x < 1$. For example, let $s_n = w_1 x_1 + \ldots + w_n x_n$, $\tilde{s}_n = \log_2(s_n)$, and $\tilde{p}_i = \tilde{w}_i + \tilde{x}_i$. When $n = 2$,
$$\tilde{s}_2 = \log_2\left(\sum_{i=1}^{2} \mathrm{Bitshift}(1, \tilde{p}_i)\right) \simeq \max(\tilde{p}_1, \tilde{p}_2) + \mathrm{Bitshift}\left(1, -\left|\tilde{p}_1 - \tilde{p}_2\right|\right), \quad (3)$$
Quantizing the activations and weights in the log-domain ($\log_2(x)$ and $\log_2(w)$) instead of $x$ and $w$ is also motivated by leveraging the structure of the non-uniform distributions of $x$ and $w$. A detailed treatment is shown in the next section. In order to quantize, we propose two hardware-friendly flavors. The first option is to simply floor the input. This method computes $\lfloor \log_2(w) \rfloor$ by returning the position of the first 1 bit seen from the most significant bit (MSB). The second option is to round to the nearest integer, which is more precise than the first option. With the latter option, after computing the integer part, the fractional part is computed in order to assert the rounding direction. This method of rounding is summarized as follows. Pick $m$ bits followed by the leftmost 1 and consider it as a fixed point number $F$ with 0 integer bits and $m$ fractional bits. Then, if $F \ge \sqrt{2} - 1$, round $F$ up to the nearest integer and otherwise round it down to the nearest integer.
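A small sketch of these two rounding options on unsigned integers (our illustration; `m` is the number of fraction bits inspected after the leading one, and 3 is an arbitrary choice):

```python
def floor_log2(v):
    """Option 1: position of the leading 1 bit, i.e. floor(log2(v)) for v >= 1."""
    return v.bit_length() - 1

def round_log2(v, m=3):
    """Option 2: round log2(v) using the sqrt(2)-threshold rule on the m bits
    after the leading 1, treated as a fraction F in [0, 1)."""
    e = v.bit_length() - 1
    frac_bits = (v - (1 << e)) << m >> e if e > 0 else 0   # top m mantissa bits
    F = frac_bits / (1 << m)
    return e + 1 if F >= 2 ** 0.5 - 1 else e

for v in (5, 11, 23, 64):
    print(v, floor_log2(v), round_log2(v))
```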
and for n in general,
$$\tilde{s}_n \simeq \max(\tilde{s}_{n-1}, \tilde{p}_n) + \mathrm{Bitshift}\left(1, -\left|\lfloor \tilde{s}_{n-1} \rfloor - \tilde{p}_n\right|\right). \quad (4)$$
Note that $\tilde{s}_i$ preserves the fractional part of the word during accumulation. Both accumulation in the linear domain and accumulation in the log domain have their pros and cons. Accumulation in the linear domain is simpler but requires larger bit widths to accommodate large dynamic range numbers. Accumulation in the log domain in (3) and (4) appears to be more complicated, but is in fact simply computed using bit-wise operations in digital hardware.
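A sketch of this running-max accumulation (our illustration of Eqs. (3) and (4); the fractional part of s̃ is simply kept in a float here):

```python
import numpy as np

def bitshift(a, b):
    """Bitshift(a, b) = a * 2**b (b may be negative or fractional in this sketch)."""
    return a * 2.0 ** b

def accumulate_log(p):
    """Given log-domain products p̃_i = w̃_i + x̃_i, return s̃_n ≈ log2(sum_i 2**p̃_i)
    via s̃_n ≈ max(s̃_{n-1}, p̃_n) + Bitshift(1, -|floor(s̃_{n-1}) - p̃_n|)."""
    s = float(p[0])
    for pn in p[1:]:
        s = max(s, pn) + bitshift(1.0, -abs(np.floor(s) - pn))
    return s

p = np.array([3, 1, 2, 0, 2])                  # integer log-domain products
approx = 2.0 ** accumulate_log(p)
exact = float(np.sum(2.0 ** p.astype(float)))  # 8 + 2 + 4 + 1 + 4 = 19
print(approx, "vs", exact)
```

In hardware the running value s̃ would be held in a short fixed-point word, which is what makes this max-and-shift form attractive.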
# 4. Experiments of Proposed Methods
Here we evaluate our methods as detailed in Sections 3.1 and 3.2 on the classification task of ILSVRC-2012 (Deng et al., 2009) using Chainer (Tokui et al., 2015).
Table 1. Structure of AlexNet (Krizhevsky et al., 2012) with quantization
| layer | # Weight | # Input | FSR |
|---|---|---|---|
| ReLU(Conv1) | 96·3·11² | 3·227² | - |
| LogQuant1 | - | 96·55² | fsr + 3 |
| LRN1 | - | - | - |
| Pool1 | - | 96·55² | - |
| ReLU(Conv2) | 256·96·5² | 96·27² | - |
| LogQuant2 | - | 256·27² | fsr + 3 |
| LRN2 | - | - | - |
| Pool2 | - | 256·27² | - |
| ReLU(Conv3) | 384·256·3² | 256·13² | - |
| LogQuant3 | - | 384·13² | fsr + 4 |
| ReLU(Conv4) | 384·384·3² | 384·13² | - |
| LogQuant4 | - | 384·13² | fsr + 3 |
| ReLU(Conv5) | 256·384·3² | 384·13² | - |
| LogQuant5 | - | 256·13² | fsr + 3 |
| Pool5 | - | 256·13² | - |
| ReLU(FC6) | 4096·256·6² | 256·6² | - |
| LogQuant6 | - | 4096 | fsr + 1 |
| ReLU(FC7) | 4096·4096 | 4096 | - |
| LogQuant7 | - | 4096 | fsr |
| FC8 | 1000·4096 | 4096 | - |
We evaluate method 1 (Section 3.1) on inference (forward pass) in Section 4.1. Similarly, we evaluate method 2 (Section 3.2) on inference in Sections 4.2 and 4.3. For those experiments, we use published models (AlexNet (Krizhevsky et al., 2012), VGG16 (Simonyan & Zisserman, 2014)) from the caffe model zoo (Jia et al., 2014) without any fine tuning (or extra retraining). Finally, we evaluate method 2 on training in Section 4.4.
Table 2. Structure of VGG16 (Simonyan & Zisserman, 2014) with quantization (columns: layer, # Weight, # Input, FSR)
# 4.1. Logarithmic Representation of Activations
This experiment evaluates the classification accuracy using logarithmic activations and floating point 32b for the weights. In similar spirit to that of (Gupta et al., 2015), we describe the logarithmic quantization layer LogQuant that performs the element-wise operation as follows:
$$\mathrm{LogQuant}(x, \mathrm{bitwidth}, \mathrm{FSR}) = \begin{cases} 0 & x = 0, \\ 2^{\tilde{x}} & \text{otherwise}, \end{cases} \quad (5)$$

where

$$\tilde{x} = \mathrm{Clip}\left(\mathrm{Round}(\log_2(|x|)), \mathrm{FSR} - 2^{\mathrm{bitwidth}}, \mathrm{FSR}\right), \quad (6)$$

$$\mathrm{Clip}(x, \mathrm{min}, \mathrm{max}) = \begin{cases} 0 & x \le \mathrm{min}, \\ \mathrm{max} - 1 & x \ge \mathrm{max}, \\ x & \text{otherwise}. \end{cases} \quad (7)$$
These layers perform the logarithmic quantization and computation as detailed in Section 3.1. Tables 1 and 2
illustrate the addition of these layers to the models. The quantizer has a specified full scale range, and this range in linear scale is 2^FSR, where we express this as simply FSR throughout this paper for notational convenience. The FSR values for each layer are shown in Tables 1 and 2; they show fsr added by an offset parameter. This offset parameter is chosen to properly handle the variation of activation ranges from layer to layer using 100 images from the training set. The fsr is a parameter which is global to the network and is tuned to perform the experiments to measure the effect of FSR on classification accuracy. The bitwidth is the number of bits required to represent a number after quantization. Note that since we assume applying quantization after the ReLU function, x is 0 or positive and then we
use unsigned format without sign bit for activations.
In order to evaluate our logarithmic representation, we detail an equivalent linear quantization layer described as
$$\mathrm{LinearQuant}(x, \mathrm{bitwidth}, \mathrm{FSR}) = \mathrm{Clip}\left(\mathrm{Round}\left(\frac{x}{\mathrm{step}}\right) \times \mathrm{step}, 0, 2^{\mathrm{FSR}}\right), \quad (8)$$

where $\mathrm{step} = 2^{\mathrm{FSR} - \mathrm{bitwidth}}$.
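A NumPy sketch of the two element-wise quantizers of Eqs. (5)-(8) (our illustration; the integer encode step and the decode back to a value that real hardware would keep separate are folded into one function, and sub-range inputs are clamped to the smallest code, which is one reading of Eq. (7)):

```python
import numpy as np

def log_quant(x, bitwidth=4, fsr=5):
    """Eqs. (5)-(7): map non-negative activations onto powers of two with
    full-scale range 2**fsr and 2**bitwidth codes (the zero code maps back to 0)."""
    x = np.asarray(x, dtype=np.float64)
    lo, hi = fsr - 2 ** bitwidth, fsr
    with np.errstate(divide="ignore"):
        e = np.round(np.log2(np.abs(x)))
    e = np.clip(e, lo, hi - 1)            # small values clamped to the lowest code here
    return np.where(x == 0, 0.0, np.exp2(e))

def linear_quant(x, bitwidth=4, fsr=5):
    """Eq. (8): uniform quantization with step = 2**(fsr - bitwidth), clipped to [0, 2**fsr)."""
    step = 2.0 ** (fsr - bitwidth)
    x = np.asarray(x, dtype=np.float64)
    q = np.round(x / step) * step
    return np.clip(q, 0.0, 2.0 ** fsr - step)

a = np.array([0.0, 0.07, 0.4, 1.3, 6.0, 40.0])
print(log_quant(a))      # -> [0., 0.0625, 0.5, 1., 8., 16.]
print(linear_quant(a))   # -> [0., 0., 0., 2., 6., 30.]
```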
Figure 2 illustrates the effect of the quantizer on activations following the conv2_2 layer used in VGG16. The pre-quantized distribution tends to 0 exponentially, and the log-quantized distribution illustrates how the log-encoded activations are uniformly equalized across many output bins, which is not prevalent in the linear case. Many smaller activation values are more finely represented by log quantization compared to linear quantization. The total quantization error $\frac{1}{N}\|\mathrm{Quantize}(x) - x\|_1$, where $\mathrm{Quantize}(\bullet)$ is $\mathrm{LogQuant}(\bullet)$ or $\mathrm{LinearQuant}(\bullet)$ and $x$ is the vectorized activations of size $N$, is less for the log-quantized case than for linear. This result is illustrated in Figure 3. Using linear quantization with step size of 1024, we obtain a distribution of quantization errors that are highly concentrated in the region where $|\mathrm{LinearQuant}(x) - x| < 512$. However, log quantization with the same bitwidth as linear results in a significantly
lower number of quantization errors in the region $128 < |\mathrm{LogQuant}(x) - x| < 512$. This comes at the expense of a slight increase in errors in the region $512 < |\mathrm{LogQuant}(x) - x|$. Nonetheless, the quantization errors are $\frac{1}{N}\|\mathrm{LogQuant}(x) - x\|_1 = 34.19$ for log and $\frac{1}{N}\|\mathrm{LinearQuant}(x) - x\|_1 = 102.89$ for linear. We run the models as described in Tables 1 and 2 and test on the validation set without data augmentation. We evaluate it with variable bitwidths and FSRs for both quantizer layers.
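The error metric used above is easy to reproduce. The sketch below (our illustration; it uses synthetic, roughly exponential activations rather than the actual conv2_2 outputs, so the numbers will differ from 34.19 and 102.89) compares the two quantizers at the same bitwidth:

```python
import numpy as np

def log_quant(x, bitwidth, fsr):
    e = np.clip(np.round(np.log2(np.maximum(x, 1e-12))), fsr - 2 ** bitwidth, fsr - 1)
    return np.where(x == 0, 0.0, np.exp2(e))

def linear_quant(x, bitwidth, fsr):
    step = 2.0 ** (fsr - bitwidth)
    return np.clip(np.round(x / step) * step, 0.0, 2.0 ** fsr - step)

rng = np.random.default_rng(0)
x = rng.exponential(scale=300.0, size=100_000)      # synthetic post-ReLU-like activations

for name, q in (("log", log_quant(x, 4, 13)), ("linear", linear_quant(x, 4, 13))):
    print(name, float(np.mean(np.abs(q - x))))      # (1/N) ||Quantize(x) - x||_1
```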
[Figure 2 plot: activation counts (log scale, a.u.) versus activation value, 0–8192.]
Figure 2. Distribution of activations of the conv2_2 layer in VGG16 before and after log and linear quantization. The order (from top to bottom) is: before log-quantization, after log-quantization, before linear quantization, and after linear quantization. The color highlights the binning process of these two quantizers.
by 4b linear for VGG16. Third, with 4b log, there is no loss in top-5 accuracy from the original float32 representation.
Table 3. Top-5 accuracies with quantized activations at optimal FSRs
| Model | Float 32b | Log. 3b | Log. 4b | Linear 3b | Linear 4b |
|---|---|---|---|---|---|
| AlexNet | 78.3% | 76.9% (fsr = 7) | 76.9% (fsr = 15) | 77.1% (fsr = 5) | 77.6% (fsr = 5) |
| VGG16 | 89.8% | 89.2% (fsr = 6) | 89.8% (fsr = 11) | 83.0% (fsr = 3) | 89.4% (fsr = 4) |
Figure 4 illustrates the results of AlexNet. Using only 3 bits to represent the activations for both logarithmic and linear quantizations, the top-5 accuracy is still very close to that of the original, unquantized model encoded at floating-point 32b. However, logarithmic representations tolerate a large dynamic range of FSRs. For example, using 4b log, we can obtain 3 orders of magnitude variation in the full scale without a significant loss of top-5 accuracy. We see similar results for VGG16 as shown in Figure 5. Table 3 lists the classification accuracies with the optimal FSRs for each case. There are some interesting observations. First, 3b log performs 0.2% worse than 3b linear for AlexNet but 6.2% better for VGG16, which is a higher capacity network than AlexNet. Second, by encoding the activations in 3b log, we achieve the same top-5 accuracy compared to that achieved
# 4.2. Logarithmic Representation of Weights of Fully Connected Layers
The FC weights are quantized using the same strategies as those in Section 4.1, except that they have a sign bit. We evaluate the classification performance using log data representation for both FC weights and activations jointly using method 2 in Section 3.2. For comparison, we use linear for FC weights and log for activations as reference. For both methods, we use the optimal 4b log for activations that were computed in Section 4.1.
Table 4 compares the mentioned approaches along with floating point. We observe a small 0.4% win for log over linear for AlexNet but a 0.2% decrease for VGG16. Nonetheless, log computation is performed without the use of multipliers.
[Figure 3 plot: counts of quantization errors versus $|\mathrm{LogQuant}(x) - x|$ and $|\mathrm{LinearQuant}(x) - x|$ over 0–640; $\|\mathrm{LogQuant}(x) - x\|_1 / N = 34.19$, $\|\mathrm{LinearQuant}(x) - x\|_1 / N = 102.89$.]
[Figure 5 plot: top-5 accuracy versus full scale range ($2^{fsr}$) for log quantization 3b/4b, linear quantization 3b/4b, and float 32b.]
Figure 3. Comparison of the quantization error distribution between logarithmic quantization and linear quantization
Figure 5. Top5 Accuracy vs Full scale range: VGG16
[Figure 4 plot: top-5 accuracy versus full scale range ($2^{fsr}$) for log quantization 3b/4b, linear quantization 3b/4b, and float 32b.]
Figure 4. Top5 Accuracy vs Full scale range: AlexNet
# 4.3. Logarithmic Representation of Weights of Convolutional Layers
We now represent the convolutional layers using the same procedure. We keep the representation of activations at 4b log and the representation of weights of FC layers at 4b log, and compare our log method with the linear reference and ideal ï¬oating point. We also perform the dot products using two different bases: 2, 2. Note that there is no additional overhead for log base- 2 as it is computed with the same equation shown in Equation 4. | 1603.01025#25 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
Table 5 shows the classification results. The results illustrate an approximate 6% drop in performance from floating point down to 5b base-2 but a relatively minor 1.7% drop for 5b base-√2. They include the sign bit. There are also some important observations here.
Table 4. Top-5 accuracy after applying quantization to weights of FC layers

| Model | Float 32b | Log. 4b | Linear 4b |
|---|---|---|---|
| AlexNet | 76.9% | 76.8% | 76.4% |
| VGG16 | 89.8% | 89.5% | 89.7% |

Table 5. Top-5 accuracy after applying quantization to weights of convolutional layers

| Model | Float 32b | Linear 5b | Base-2 Log 5b | Base-√2 Log 5b |
|---|---|---|---|---|
| AlexNet | 76.8% | 73.6% | 70.6% | 75.1% |
| VGG16 | 89.5% | 85.1% | 83.4% | 89.0% |
An added benefit to quantization is a reduction of the model size. By quantizing down to 4b log including the sign bit, we compress the FC weights for free significantly from 1.9 Gb to 0.27 Gb for AlexNet and 4.4 Gb to 0.97 Gb for VGG16. This is because the dense FC layers occupy 98.2% and 89.4% of the total model size for AlexNet and VGG16 respectively.
We first observe that the weights of the convolutional layers for AlexNet and VGG16 are more sensitive to quantization than are FC weights. Each FC weight is used only once per image (batch size of 1) whereas convolutional weights are reused many times across the layer's input activation map. Because of this, the quantization error of each weight now influences the dot products across the entire activation volume. Second, we observe that by moving from 5b base-2 to a finer granularity such as 5b base-√2, we allow the
network to 1) be robust to quantization errors and degradation in classification performance and 2) retain the practical features of log-domain arithmetic.
The distributions of quantization errors for both 5b base-2 and 5b base-√2 are shown in Figure 6. The total quantization error on the weights, $\frac{1}{N}\|\mathrm{Quantize}(x) - x\|_1$, where $x$ is the vectorized weights of size $N$, is 2× smaller for base-√2 than for base-2.

Algorithm 1 Training a CNN with base-2 logarithmic representation. $C$ is the softmax loss for each minibatch. LogQuant($x$) quantizes $x$ in base-2 log-domain. The optimization step Update($W_k$, $g_{W_k}$) updates the weights $W_k$ based on backpropagated gradients $g_{W_k}$. We use the SGD with momentum and Adam rule.
Require: a minibatch of inputs and targets $(a_0, a^*)$, previous weights $W$.
Ensure: updated weights $W^{t+1}$
{1. Computing the parameters' gradient:}
{1.1. Forward propagation:}
for $k = 1$ to $L$ do
    $\tilde{W}_k \leftarrow \mathrm{LogQuant}(W_k)$
    $a_k \leftarrow \mathrm{ReLU}(\tilde{a}_{k-1} \tilde{W}_k)$
    $\tilde{a}_k \leftarrow \mathrm{LogQuant}(a_k)$
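A hedged sketch of the quantized forward pass of Algorithm 1 for a small fully-connected stack (our illustration; `log_quant` is a simplified signed stand-in for the LogQuant of Section 4.1, and the layer sizes, bitwidth, and fsr values are arbitrary):

```python
import numpy as np

def log_quant(v, bitwidth=5, fsr=2):
    """Simplified signed base-2 quantizer used as a stand-in for LogQuant."""
    e = np.clip(np.round(np.log2(np.abs(v) + 1e-45)), fsr - 2 ** (bitwidth - 1), fsr - 1)
    return np.where(v == 0, 0.0, np.sign(v) * np.exp2(e))

def forward(a0, weights):
    """{1.1 Forward propagation}: W̃_k <- LogQuant(W_k); a_k <- ReLU(ã_{k-1} W̃_k); ã_k <- LogQuant(a_k)."""
    a_q = a0
    for W in weights:
        W_q = log_quant(W)
        a = np.maximum(a_q @ W_q, 0.0)     # ReLU
        a_q = log_quant(a)
    return a_q

rng = np.random.default_rng(0)
weights = [rng.normal(0, 0.1, (8, 16)), rng.normal(0, 0.1, (16, 4))]
print(forward(rng.normal(0, 1, (2, 8)), weights).shape)   # (2, 4)
```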
# 4.4. Training with Logarithmic Representation
We incorporate log representation during the training phase. This entire algorithm can be computed using Method 2 in Section 3.2. Table 6 illustrates the networks that we compare. The proposed log and linear networks are trained at the same resolution using 4-bit unsigned activations and 5-bit signed weights and gradients using Algorithm 1 on the CIFAR10 dataset with simple data augmentation described in (He et al., 2015). Note that unlike BinaryNet (Courbariaux & Bengio, 2016), we quantize the backpropagated gradients to train log-net. This enables end-to-end training using logarithmic representation at the 5-bit level. For linear quantization however, we found it necessary to keep the gradients in their unquantized floating-point precision form in order to achieve good convergence. Furthermore, we include the training curve for BinaryNet, which uses unquantized gradients.
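The gradient-quantization step can be sketched as follows (our illustration only: a plain SGD update in which the backpropagated gradient is passed through a simplified base-2 quantizer before the weight update; the momentum/Adam variants and the handling of the quantizers inside backpropagation are omitted):

```python
import numpy as np

def log_quant(v, bitwidth=5, fsr=2):
    e = np.clip(np.round(np.log2(np.abs(v) + 1e-45)), fsr - 2 ** (bitwidth - 1), fsr - 1)
    return np.where(v == 0, 0.0, np.sign(v) * np.exp2(e))

def sgd_step_logquant(W, grad_W, lr=0.01):
    """One weight update with the backpropagated gradient quantized in the log domain."""
    g_q = log_quant(grad_W)          # gradients quantized end-to-end, as in the log-5b run
    return W - lr * g_q

rng = np.random.default_rng(0)
W = rng.normal(0, 0.1, (16, 4))
grad_W = rng.normal(0, 0.01, (16, 4))
W = sgd_step_logquant(W, grad_W)
print(W.shape)
```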
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
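The key point above is that log-net is trained with quantized backpropagated gradients. A minimal sketch of what that might look like for one SGD-with-momentum step is below; it reuses the same illustrative `log_quant` helper as the sketch after chunk 29, and keeping a full-precision master copy of the weights between steps is an assumption of this sketch rather than a detail taken from the paper.

```python
import numpy as np

def log_quant(x, bitwidth=5, fsr=0):
    # Same illustrative base-2 quantizer as in the earlier sketch.
    sign, absx = np.sign(x), np.abs(x)
    exp = np.clip(np.round(np.log2(np.where(absx > 0, absx, 1.0))),
                  fsr - 2 ** bitwidth, fsr)        # assumed clipping range
    return np.where(absx > 0, sign * 2.0 ** exp, 0.0)

def sgd_momentum_step(W, gW, velocity, lr=0.01, momentum=0.9):
    # Quantize the backpropagated gradient to a 5-bit log representation
    # (as in Section 4.4), then apply a standard momentum update to the
    # full-precision copy of the weights.
    gq = log_quant(gW, bitwidth=5)
    velocity = momentum * velocity - lr * gq
    return W + velocity, velocity

# Usage: one step on a random weight matrix and gradient.
rng = np.random.default_rng(0)
W = rng.standard_normal((16, 32))
gW = rng.standard_normal((16, 32))
W, v = sgd_momentum_step(W, gW, velocity=np.zeros_like(W))
```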
1603.01025 | 32 | Fig. 7 illustrates the training results of log, linear, and BinaryNet. Final test accuracies for log-5b, linear-5b, and BinaryNet are 0.9379, 0.9253, and 0.8862 respectively, where linear-5b and BinaryNet use unquantized gradients. The test results indicate that even with quantized gradients, our proposed network with log representation still outperforms the others that use unquantized gradients.
[Figure 7 plots: training loss (top) and test accuracy (bottom) versus epoch for float 32b (0.941), log-5b (0.9379), linear-5b (0.2909), linear-5b with unquantized gradients (0.9253), and BinaryNet with unquantized gradients (0.8862).]
Figure 7. Loss curves and test accuracies
# 5. Conclusion | 1603.01025#32 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 33 | # 5. Conclusion
In this paper, we describe a method to represent the weights and activations with low resolution in the log-domain, which eliminates bulky digital multipliers. This method is also motivated by the non-uniform distributions of weights and activations, making log representation more robust to quantization as compared to linear. We evaluate our methods on the classification task of ILSVRC-2012 using pretrained models (AlexNet and VGG16). We also offer extensions that incorporate end-to-end training using log representation, including gradients.
Table 6. Structure of VGG-like network for CIFAR10 (columns: log quantization, linear quantization, BinaryNet) | 1603.01025#33 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 34 | Table 6 (body).
log quantization: Conv 64·3·3² BatchNorm ReLU LogQuant, Conv 64·64·3² BatchNorm ReLU LogQuant, MaxPool 2×2, Conv 128·64·3² BatchNorm ReLU LogQuant, Conv 128·128·3² BatchNorm ReLU LogQuant, MaxPool 2×2, Conv 256·128·3² BatchNorm ReLU LogQuant, Conv 256·256·3² BatchNorm ReLU LogQuant, Conv 256·256·3² BatchNorm ReLU LogQuant, Conv 256·256·3² BatchNorm ReLU LogQuant, MaxPool 2×2, FC 1024·256·4² BatchNorm ReLU LogQuant, FC 1024·1024 BatchNorm ReLU LogQuant, FC 10·1024
linear quantization: Conv 64·3·3² BatchNorm ReLU LinearQuant, Conv 64·64·3² BatchNorm ReLU LinearQuant, MaxPool 2×2, Conv 128·64·3² BatchNorm ReLU LinearQuant, Conv 128·128·3² BatchNorm ReLU LinearQuant, MaxPool 2×2, Conv 256·128·3² BatchNorm ReLU LinearQuant, Conv 256·256·3² BatchNorm ReLU LinearQuant, Conv 256·256·3² BatchNorm ReLU LinearQuant, Conv 256·256·3² BatchNorm ReLU LinearQuant, MaxPool 2×2, FC 1024·256·4² BatchNorm ReLU LinearQuant, FC 1024·1024 BatchNorm ReLU LinearQuant, FC 10·1024
(An illustrative code sketch of one Conv-BatchNorm-ReLU-LogQuant block follows this record.)
# References | 1603.01025#34 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
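For illustration only, here is how one "Conv BatchNorm ReLU LogQuant" unit from Table 6 could be expressed. PyTorch is used here purely for brevity and the framework choice is ours; reading "64·3·3²" as 64 filters, 3 input channels, and a 3×3 kernel is an assumption; and the activation quantizer below omits the gradient handling (e.g., straight-through estimation) that real training would need.

```python
import torch
import torch.nn as nn

class LogQuantAct(nn.Module):
    # Illustrative activation quantizer: positive activations are rounded
    # to the nearest power of two; zeros coming out of the ReLU stay zero.
    def forward(self, x):
        exp = torch.round(torch.log2(x.clamp(min=1e-12)))
        return torch.where(x > 0, torch.pow(2.0, exp), torch.zeros_like(x))

def conv_block(cin, cout):
    # One Conv-BatchNorm-ReLU-LogQuant unit of Table 6 (3x3 kernels assumed).
    return nn.Sequential(
        nn.Conv2d(cin, cout, kernel_size=3, padding=1, bias=False),
        nn.BatchNorm2d(cout),
        nn.ReLU(inplace=True),
        LogQuantAct(),
    )

# First stage of the VGG-like CIFAR10 network: Conv 64·3·3², Conv 64·64·3², MaxPool 2x2.
stage1 = nn.Sequential(conv_block(3, 64), conv_block(64, 64), nn.MaxPool2d(2))
```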
1603.01025 | 35 | Abadi, Martín, Agarwal, Ashish, Barham, Paul, Brevdo, Eugene, Chen, Zhifeng, Citro, Craig, Corrado, Greg S., Davis, Andy, Dean, Jeffrey, Devin, Matthieu, Ghemawat, Sanjay, Goodfellow, Ian, Harp, Andrew, Irving, Geoffrey, Isard, Michael, Jia, Yangqing, Jozefowicz, Rafal, Kaiser, Lukasz, Kudlur, Manjunath, Levenberg, Josh, Mané, Dan, Monga, Rajat, Moore, Sherry, Murray,
Derek, Olah, Chris, Schuster, Mike, Shlens, Jonathon, Steiner, Benoit, Sutskever, Ilya, Talwar, Kunal, Tucker, Paul, Vanhoucke, Vincent, Vasudevan, Vijay, Viégas, Fernanda, Vinyals, Oriol, Warden, Pete, Wattenberg, Martin, Wicke, Martin, Yu, Yuan, and Zheng, Xiaoqiang. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015.
Solid-State Circuits Conference (ISSCC), 2016 IEEE International. IEEE, 2016. | 1603.01025#35 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 36 | Solid-State Circuits Conference (ISSCC), 2016 IEEE International. IEEE, 2016.
Gupta, Suyog, Agrawal, Ankur, Gopalakrishnan, Kailash, and Narayanan, Pritish. Deep learning with limited numerical precision. In Proceedings of The 32nd International Conference on Machine Learning (ICML2015), pp. 1737–1746, 2015.
Audhkhasi, Kartik, Osoba, Osonde, and Kosko, Bart. Noise benefits in backpropagation and deep bidirectional pre-training. In Proceedings of The 2013 International Joint Conference on Neural Networks (IJCNN), pp. 1–8. IEEE, 2013.
Han, Song, Mao, Huizi, and Dally, William J. Deep compression: Compressing deep neural networks with pruning, trained quantization and Huffman coding. arXiv preprint arXiv:1510.00149, 2015a.
Bishop, Christopher M. Training with noise is equivalent to Tikhonov regularization. In Neural Computation, pp. 108–116, 1995. | 1603.01025#36 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 37 | Bishop, Christopher M. Training with noise is equivalent to Tikhonov regularization. In Neural Computation, pp. 108–116, 1995.
Bottou, Léon and Bousquet, Olivier. The tradeoffs of large scale learning. In Platt, J.C., Koller, D., Singer, Y., and Roweis, S.T. (eds.), Advances in Neural Information Processing Systems 20, pp. 161–168. Curran Associates, Inc., 2007.
Han, Song, Pool, Jeff, Tran, John, and Dally, William. Learning both weights and connections for efficient neural networks. In Proceedings of Advances in Neural Information Processing Systems 28 (NIPS2015), pp. 1135–1143, 2015b.
He, Kaiming, Zhang, Xiangyu, Ren, Shaoqing, and Sun, Jian. Deep residual learning for image recognition. arXiv preprint arXiv:1512.03385, 2015.
Chetlur, Sharan, Woolley, Cliff, Vandermersch, Philippe, Cohen, Jonathan, Tran, John, Catanzaro, Bryan, and Shelhamer, Evan. cuDNN: Efficient primitives for deep learning. In Proceedings of Deep Learning and Representation Learning Workshop: NIPS 2014, 2014. | 1603.01025#37 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |
1603.01025 | 38 | Jia, Yangqing, Shelhamer, Evan, Donahue, Jeff, Karayev, Sergey, Long, Jonathan, Girshick, Ross, Guadarrama, Sergio, and Darrell, Trevor. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the 22nd ACM International Conference on Multimedia, pp. 675–678. ACM, 2014.
Courbariaux, Matthieu and Bengio, Yoshua. BinaryNet: Training deep neural networks with weights and activations constrained to +1 or -1. arXiv preprint arXiv:1602.02830, 2016.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., and Fei-Fei, L. ImageNet: A Large-Scale Hierarchical Image Database. In CVPR09, 2009.
Denton, Emily, Zaremba, Wojciech, Bruna, Joan, LeCun, Yann, and Fergus, Rob. Exploiting linear structure within convolutional networks for efficient evaluation. In Advances in Neural Information Processing Systems 27 (NIPS2014), pp. 1269–1277, 2014. | 1603.01025#38 | Convolutional Neural Networks using Logarithmic Data Representation | Recent advances in convolutional neural networks have considered model
complexity and hardware efficiency to enable deployment onto embedded systems
and mobile devices. For example, it is now well-known that the arithmetic
operations of deep networks can be encoded down to 8-bit fixed-point without
significant deterioration in performance. However, further reduction in
precision down to as low as 3-bit fixed-point results in significant losses in
performance. In this paper we propose a new data representation that enables
state-of-the-art networks to be encoded to 3 bits with negligible loss in
classification performance. To perform this, we take advantage of the fact that
the weights and activations in a trained network naturally have non-uniform
distributions. Using non-uniform, base-2 logarithmic representation to encode
weights, communicate activations, and perform dot-products enables networks to
1) achieve higher classification accuracies than fixed-point at the same
resolution and 2) eliminate bulky digital multipliers. Finally, we propose an
end-to-end training procedure that uses log representation at 5-bits, which
achieves higher final test accuracy than linear at 5-bits. | http://arxiv.org/pdf/1603.01025 | Daisuke Miyashita, Edward H. Lee, Boris Murmann | cs.NE, cs.LG | 10 pages, 7 figures | null | cs.NE | 20160303 | 20160317 | [
{
"id": "1510.03009"
},
{
"id": "1511.06488"
},
{
"id": "1602.02830"
},
{
"id": "1511.06807"
},
{
"id": "1512.03385"
},
{
"id": "1510.00149"
}
] |