TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems
Figure 7: Synchronous and asynchronous data parallel training

Validating complex mathematical operations in the presence of an inherently stochastic system is quite challenging. The strategies outlined above proved invaluable in gaining confidence in the system and ultimately in instantiating the Inception model in TensorFlow. These efforts resulted in a 6-fold improvement in training time over our existing DistBelief implementation of the model, and such speed gains proved indispensable in training a new class of larger-scale image recognition models.
# 7 Common Programming Idioms

TensorFlow's basic dataflow graph model can be used in a variety of ways for machine learning applications. One domain we care about is speeding up training of computationally intensive neural network models on large datasets. This section describes several techniques that we and others have developed in order to accomplish this, and illustrates how to use TensorFlow to realize these various approaches. The approaches in this subsection assume that the model is being trained using stochastic gradient descent (SGD) with relatively modest-sized mini-batches of 100 to 1000 examples.

# Data Parallel Training

One simple technique for speeding up SGD is to parallelize the computation of the gradient for a mini-batch across mini-batch elements. For example, if we are using a mini-batch size of 1000 elements, we can use 10 replicas of the model to each compute the gradient for 100 elements, and then combine the gradients and apply updates to the parameters synchronously, in order to behave exactly as if we were running the sequential SGD algorithm with a batch size of 1000 elements. In this case, the TensorFlow graph simply has many replicas of the portion of the graph that does the bulk of the model computation, and a single client thread drives the entire training loop for this large graph. This is illustrated in the top portion of Figure 7. This approach can also be made asynchronous, where the TensorFlow graph has many replicas of the portion of the graph that does the bulk of the model computation, and each one of these replicas also applies the parameter updates to the model parameters asynchronously.
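As a rough illustration of the two variants above, here is a minimal sketch in the style of the early open-source Python front end (API names may differ in later TensorFlow versions; the toy linear model, dimensions, and replica count are illustrative, not from the paper):

```python
import tensorflow as tf

NUM_REPLICAS = 10          # illustrative: 10 replicas x 100 examples behaves like a batch of 1000
PER_REPLICA_BATCH = 100
INPUT_DIM, NUM_CLASSES = 784, 10

# Shared parameters: stateful Variable nodes in the single dataflow graph.
w = tf.Variable(tf.zeros([INPUT_DIM, NUM_CLASSES]))
b = tf.Variable(tf.zeros([NUM_CLASSES]))
opt = tf.train.GradientDescentOptimizer(0.01)

inputs, labels, replica_grads = [], [], []
for _ in range(NUM_REPLICAS):
    x = tf.placeholder(tf.float32, [PER_REPLICA_BATCH, INPUT_DIM])
    y = tf.placeholder(tf.float32, [PER_REPLICA_BATCH, NUM_CLASSES])
    inputs.append(x)
    labels.append(y)
    logits = tf.matmul(x, w) + b
    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits, y))
    # Each replica contributes gradients with respect to the shared variables.
    replica_grads.append(opt.compute_gradients(loss, [w, b]))

# Synchronous variant: average the replica gradients, then apply one update,
# so the result matches sequential SGD with the full batch.
avg_grads = []
for grads_and_vars in zip(*replica_grads):
    grads = [g for g, _ in grads_and_vars]
    var = grads_and_vars[0][1]
    avg_grads.append((tf.add_n(grads) / NUM_REPLICAS, var))
sync_train_op = opt.apply_gradients(avg_grads)

# Asynchronous variant: each replica applies its own update; one client thread
# per replica would then run its train op independently.
async_train_ops = [opt.apply_gradients(g) for g in replica_grads]
```

The sketch only builds the graph; a driver program would run `sync_train_op` (one client thread) or the per-replica `async_train_ops` (one client thread each) inside a `tf.Session`.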
In this configuration, there is one client thread for each of the graph replicas. This is illustrated in the bottom portion of Figure 7. This asynchronous approach was also described in [14].

Figure 8: Model parallel training

Figure 9: Concurrent steps

# Model Parallel Training

Model parallel training, where different portions of the model computation are done on different computational devices simultaneously for the same batch of examples, is also easy to express in TensorFlow. Figure 8 shows an example of a recurrent, deep LSTM model used for sequence to sequence learning (see [47]), parallelized across three different devices.

# Concurrent Steps for Model Computation Pipelining

Another common way to get better utilization for training deep neural networks is to pipeline the computation of the model within the same devices, by running a small number of concurrent steps within the same set of devices.
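To make the model-parallel placement described above concrete, a minimal sketch using explicit device annotations follows. The recurrent layer here is a simplified stand-in for a real LSTM cell, and the device names, layer count, and sizes are illustrative only:

```python
import tensorflow as tf

SEQ_LEN, BATCH, HIDDEN = 20, 32, 512
inputs = [tf.placeholder(tf.float32, [BATCH, HIDDEN]) for _ in range(SEQ_LEN)]

def recurrent_layer(layer_inputs):
    """One recurrent layer unrolled over time; the cell is collapsed to a
    single matmul + tanh to keep the sketch short (not a full LSTM cell)."""
    w = tf.Variable(tf.random_uniform([HIDDEN, HIDDEN], -0.1, 0.1))
    state = tf.zeros([BATCH, HIDDEN])
    outputs = []
    for x in layer_inputs:
        state = tf.tanh(tf.matmul(x + state, w))
        outputs.append(state)
    return outputs

# Each layer is pinned to its own device, in the spirit of Figure 8;
# TensorFlow inserts the send/receive nodes needed to move activations
# between devices.
with tf.device("/gpu:0"):
    layer1 = recurrent_layer(inputs)
with tf.device("/gpu:1"):
    layer2 = recurrent_layer(layer1)
with tf.device("/gpu:2"):
    layer3 = recurrent_layer(layer2)
```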
This pipelining approach is shown in Figure 9. It is somewhat similar to asynchronous data parallelism, except that the parallelism occurs within the same device(s), rather than replicating the computation graph on different devices. This allows "filling in the gaps" where computation of a single batch of examples might not be able to fully utilize the full parallelism on all devices at all times during a single step.

# 8 Performance

A future version of this white paper will have a comprehensive performance evaluation section of both the single machine and distributed implementations.

# 9 Tools

This section describes some tools we have developed that sit alongside the core TensorFlow graph execution engine.

# 9.1 TensorBoard: Visualization of graph structures and summary statistics

In order to help users understand the structure of their computation graphs and also to understand the overall behavior of machine learning models, we have built TensorBoard, a companion visualization tool for TensorFlow that is included in the open source release.

# Visualization of Computation Graphs

Many of the computation graphs for deep neural networks can be quite complex. For example, the computation graph for training a model similar to Google's Inception model [48], a deep convolutional neural net that had the best classification performance in the ImageNet 2014 contest, has over 36,000 nodes in its TensorFlow computation graph, and some deep recurrent LSTM models for language modeling have more than 15,000 nodes.

Due to the size and topology of these graphs, naive visualization techniques often produce cluttered and overwhelming diagrams. To help users see the underlying organization of the graphs, the algorithms in TensorBoard collapse nodes into high-level blocks, highlighting groups with identical structures. The system also separates out high-degree nodes, which often serve bookkeeping functions, into a separate area of the screen. Doing so reduces visual clutter and focuses attention on the core sections of the computation graph.

The entire visualization is interactive: users can pan, zoom, and expand grouped nodes to drill down for details. An example of the visualization for the graph of a deep convolutional image model is shown in Figure 10.

# Visualization of Summary Data
When training machine learning models, users often want to be able to examine the state of various aspects of the model, and how this state changes over time. To this end, TensorFlow supports a collection of different Summary operations that can be inserted into the graph, including scalar summaries (e.g., for examining overall properties of the model, such as the value of the loss function averaged across a collection of examples, or the time taken to execute the computation graph), histogram-based summaries (e.g., the distribution of weight values in a neural network layer), or image-based summaries (e.g., a visualization of the filter weights learned in a convolutional neural network).

Figure 10: TensorBoard graph visualization of a convolutional neural network model

Figure 11: TensorBoard graphical display of model summary statistics time series data

Typically computation graphs are set up so that Summary nodes are included to monitor various interesting values, and every so often during execution of the training graph, the set of summary nodes is also executed, in addition to the normal set of nodes, and the client driver program writes the summary data to a log file associated with the model training.
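A minimal sketch of this summary mechanism, using the summary-related names from the initial open-source release (they may differ in later versions); the toy model, loss, and log directory are illustrative only:

```python
import numpy as np
import tensorflow as tf

x = tf.placeholder(tf.float32, [None, 784])
weights = tf.Variable(tf.random_uniform([784, 10], -0.1, 0.1))
loss = tf.reduce_mean(tf.matmul(x, weights))   # stand-in "loss" just for the sketch

# Summary ops are simply additional nodes in the graph.
tf.scalar_summary("loss", loss)
tf.histogram_summary("weights", weights)
merged = tf.merge_all_summaries()

sess = tf.Session()
sess.run(tf.initialize_all_variables())
writer = tf.train.SummaryWriter("/tmp/train_logs", sess.graph_def)

for step in range(1000):
    batch = np.random.rand(100, 784).astype(np.float32)
    if step % 100 == 0:
        # Every so often, run the summary nodes alongside the normal fetches
        # and append the serialized result to the log file TensorBoard watches.
        summary, _ = sess.run([merged, loss], feed_dict={x: batch})
        writer.add_summary(summary, step)
    else:
        sess.run(loss, feed_dict={x: batch})
```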
The TensorBoard program is then configured to watch this log file for new summary records, and can display this summary information and how it changes over time (with the ability to select the measurement of "time" to be relative wall time since the beginning of the execution of the TensorFlow program, absolute time, or "steps", a numeric measure of the number of graph executions that have occurred since the beginning of execution of the TensorFlow program). A screen shot of the visualization of summary values in TensorBoard is shown in Figure 11.

# 9.2 Performance Tracing

We also have an internal tool called EEG (not included in the initial open source release in November, 2015) that we use to collect and visualize very fine-grained information about the exact ordering and performance characteristics of the execution of TensorFlow graphs.
This tool works in both our single machine and distributed implementations, and is very useful for understanding the bottlenecks in the computation and communication patterns of a TensorFlow program.

Traces are collected simultaneously on each machine in the system from a variety of sources including Linux kernel ftrace, our own lightweight thread tracing tools and the CUDA Profiling Tools Interface (CUPTI). With these logs we can reconstruct the execution of a distributed training step with microsecond-level details of every thread switch, CUDA kernel launch and DMA operation. Traces are combined in a visualization server which is designed to rapidly extract events in a specified time range and summarize them at the appropriate detail level for the user-interface resolution. Any significant delays due to communication, synchronization or DMA-related stalls are identified and highlighted using arrows in the visualization.
Initially the UI provides an overview of the entire trace, with only the most significant performance artifacts highlighted. As the user progressively zooms in, increasingly fine resolution details are rendered.

Figure 12 shows an example EEG visualization of a model being trained on a multi-core CPU platform. The top third of the screenshot shows TensorFlow operations being dispatched in parallel, according to the dataflow constraints. The bottom section of the trace shows how most operations are decomposed into multiple work items which are executed concurrently in a thread pool. The diagonal arrows on the right-hand side show where queueing delay is building up in the thread pool. Figure 13 shows another EEG visualization with computation mainly happening on the GPU. Host threads can be seen enqueuing TensorFlow GPU operations as they become runnable (the light blue thread pool), and background housekeeping threads can be seen in other colors being migrated across processor cores. Once again, arrows show where threads are stalled on GPU to CPU transfers, or where ops experience significant queueing delay.

Finally, Figure 14 shows a more detailed view which allows us to examine how TensorFlow GPU operators are assigned to multiple GPU streams. Whenever the dataflow graph allows parallel execution or data transfer we endeavour to expose the ordering constraints to the GPU device using streams and stream dependency primitives.

# 10 Future Work

We have several different directions for future work.
We will continue to use TensorFlow to develop new and interesting machine learning models for artificial intelligence, and in the course of doing this, we may discover ways in which we will need to extend the basic TensorFlow system. The open source community may also come up with new and interesting directions for the TensorFlow implementation.

One extension to the basic programming model that we are considering is a function mechanism, whereby a user can specify an entire subgraph of a TensorFlow computation to be a reusable component. In the implementation we have designed, these functions can become reusable components even across different front-end languages for TensorFlow, so that a user could define a function using the Python front end, but then use that function as a basic building block from within the C++ front end. We are hopeful that this cross-language reusability will bootstrap a vibrant community of machine learning researchers publishing not just whole examples of their research, but also small reusable components from their work that can be reused in other contexts.

We also have a number of concrete directions to improve the performance of TensorFlow. One such direction is our initial work on a just-in-time compiler that can take a subgraph of a TensorFlow execution, perhaps with some runtime profiling information about the typical sizes and shapes of tensors, and can generate an optimized routine for this subgraph. This compiler will understand the semantics of the subgraph and perform a number of optimizations such as loop fusion, blocking and tiling for locality, specialization for particular shapes and sizes, etc.
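The proposed function mechanism is only sketched in the text above; as a point of reference, today a reusable "component" is typically just an ordinary Python helper that builds a subgraph, as in the small sketch below (layer sizes and names are illustrative). The proposed mechanism would promote such helpers to first-class graph objects that could also be called from other front ends such as C++.

```python
import tensorflow as tf

def linear_relu(x, in_dim, out_dim):
    """Builds a fully connected layer followed by a ReLU and returns its output.
    This is a plain Python helper, not the proposed cross-language function object."""
    w = tf.Variable(tf.random_uniform([in_dim, out_dim], -0.1, 0.1))
    b = tf.Variable(tf.zeros([out_dim]))
    return tf.nn.relu(tf.matmul(x, w) + b)

x = tf.placeholder(tf.float32, [None, 784])
h1 = linear_relu(x, 784, 256)     # the same building block reused twice
h2 = linear_relu(h1, 256, 128)
```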
We also imagine that a significant area for future work will be in improving the placement and node scheduling algorithms used to decide where different nodes will execute, and when they should start executing. We have currently implemented a number of heuristics in these subsystems, and we'd like to have the system instead learn to make good placement decisions (perhaps using a deep neural network, combined with a reinforcement learning objective function).

# 11 Related Work

There are many other systems that are comparable in various ways with TensorFlow. Theano [7], Torch [13], Caffe [26], Chainer [49] and the Computational Network Toolkit [54] are a few systems designed primarily for the training of neural networks. Each of these systems maps the computation onto a single machine, unlike the distributed TensorFlow implementation. Like Theano and Chainer, TensorFlow supports symbolic differentiation, thus making it easier to define and work with gradient-based optimization algorithms. Like Caffe, TensorFlow has a core written in C++, simplifying the deployment of trained models in a wide variety of production settings, including memory- and computation-constrained environments such as mobile devices.
Figure 12: EEG visualization of multi-threaded CPU operations (x-axis is time in µs).

Figure 13: EEG visualization of Inception training showing CPU and GPU activity.

The TensorFlow system shares some design characteristics with its predecessor system, DistBelief [14], and with later systems with similar designs like Project Adam [10] and the Parameter Server project [33]. Like DistBelief and Project Adam, TensorFlow allows computations to be spread out across many computational devices across many machines, and allows users to specify machine learning models using relatively high-level descriptions. Unlike DistBelief and Project Adam, though, the general-purpose dataflow graph model in TensorFlow is more flexible and more amenable to expressing a wider variety of machine learning models and optimization algorithms. It also permits a significant simplification by allowing the expression of stateful parameter nodes as variables, and variable update operations that are just additional nodes in the graph; in contrast, DistBelief, Project Adam and the Parameter Server systems all have whole separate parameter server subsystems devoted to communicating and updating parameter values.
Figure 14: Timeline of multi-stream GPU execution.
The Halide system [40] for expressing image processing pipelines uses a similar intermediate representation to the TensorFlow dataflow graph. Unlike TensorFlow, though, the Halide system actually has higher-level knowledge of the semantics of its operations and uses this knowledge to generate highly optimized pieces of code that combine multiple operations, taking into account parallelism and locality. Halide runs the resulting computations only on a single machine, and not in a distributed setting. In future work we are hoping to extend TensorFlow with a similar cross-operation dynamic compilation framework.
Like TensorFlow, several other distributed systems have been developed for executing dataflow graphs across a cluster. Dryad [24] and Flume [8] demonstrate how a complex workflow can be represented as a dataflow graph. CIEL [37] and Naiad [36] introduce generic support for data-dependent control flow: CIEL represents iteration as a DAG that dynamically unfolds, whereas Naiad uses a static graph with cycles to support lower-latency iteration. Spark [55] is optimized for computations that access the same data repeatedly, using "resilient distributed datasets" (RDDs), which are soft-state cached outputs of earlier computations. Dandelion [44] executes dataflow graphs across a cluster of heterogeneous devices, including GPUs. TensorFlow uses a hybrid dataflow model that borrows elements from each of these systems. Its dataflow scheduler, which is the component that chooses the next node to execute, uses the same basic algorithm as Dryad, Flume, CIEL, and Spark. Its distributed architecture is closest to Naiad, in that the system uses a single, optimized dataflow graph to represent the entire computation, and caches information about that graph on each device to minimize coordination overhead. Like Spark and Naiad, TensorFlow works best when there is sufficient RAM in the cluster to hold the working set of the computation. Iteration in TensorFlow uses a hybrid approach: multiple replicas of the same dataflow graph may be executing at once, while sharing the same set of variables. Replicas can share data asynchronously through the variables, or use synchronization mechanisms in the graph, such as queues, to operate synchronously. TensorFlow also supports iteration within a graph, which is a hybrid of CIEL and Naiad: for simplicity, each node fires only when all of its inputs are ready (like CIEL); but for efficiency the graph is represented as a static, cyclic dataflow (like Naiad).
# 12 Conclusions

We have described TensorFlow, a flexible dataflow-based programming model, as well as single machine and distributed implementations of this programming model. The system is born of real-world experience in conducting research and deploying more than one hundred machine learning projects throughout a wide range of Google products and services. We have open sourced a version of TensorFlow, and hope that a vibrant shared community develops around the use of TensorFlow. We are excited to see how others outside of Google make use of TensorFlow in their own work.
# Acknowledgements

The development of TensorFlow has benefited enormously from the large and broad machine learning community at Google, and in particular from the suggestions and contributions from the rest of the Google Brain team and also from the hundreds of DistBelief and TensorFlow users within Google. Without a doubt, the usability and functionality of TensorFlow has been greatly expanded by listening to their feedback.

Many individuals have contributed to TensorFlow and to its open source release, including John Giannandrea (for creating a supportive research environment), Irina Kofman and Phing Turner (project management), Bill Gruber and David Westbrook (technical writing), Dave Andersen, Anelia Angelova, Yaroslav Bulatov, Jianmin Chen, Jerjou Cheng, George Dahl, Andrew Dai, Lucy Gao, mig Gerard, Stephan Gouws, Naveen Kumar, Geoffrey Hinton, Mrinal Kalarishnan, Anjuli Kannan, Yutaka Leon-Suematsu, Frank Li, Peter Liu, Xiaobing Liu, Nishant Patil, Pierre Sermanet, Noam Shazeer, Jascha Sohl-dickstein, Philip Tucker, Yonghui Wu, Ke Yang, and Cliff Young (general contributions), Doug Fritz, Patrick Hurst, Dilip Krishnan, Daniel Smilkov, James Wexler, Jimbo Wilson, Kanit Ham Wongsuphasawat, Cassandra Xia, and the Big Picture team (graph visualization), Chris Leary, Robert Springer and the Stream Executor team, Kayur Patel, Michael Piatek, and the coLab team, and the many others who have contributed to the TensorFlow design and code base.
# References

[1] Martín Abadi, Ashish Agarwal, Paul Barham, Eugene Brevdo, Zhifeng Chen, Craig Citro, Greg S. Corrado, Andy Davis, Jeffrey Dean, Matthieu Devin, Sanjay Ghemawat, Ian Goodfellow, Andrew Harp, Geoffrey Irving, Michael Isard, Yangqing Jia, Rafal Jozefowicz, Lukasz Kaiser, Manjunath Kudlur, Josh Levenberg, Dan Mané, Rajat Monga, Sherry Moore, Derek Murray, Chris Olah, Mike Schuster, Jonathon Shlens, Benoit Steiner, Ilya Sutskever, Kunal Talwar, Paul Tucker, Vincent Vanhoucke, Vijay Vasudevan, Fernanda Viégas, Oriol Vinyals, Pete Warden, Martin Wattenberg, Martin Wicke, Yuan Yu, and Xiaoqiang Zheng. TensorFlow: Large-scale machine learning on heterogeneous systems, 2015. Software available from tensorflow.org.
[2] Anelia Angelova, Alex Krizhevsky, and Vincent Vanhoucke. Pedestrian detection with a large-field-of-view deep network. In Robotics and Automation (ICRA), 2015 IEEE International Conference on, pages 704–711. IEEE, 2015. CalTech PDF.
[3] Arvind and David E. Culler. Dataflow architectures. In Annual Review of Computer Science, vol. 1, pages 225–253, 1986. www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA166235.
[4] Arvind and Rishiyur S. Nikhil. Executing a program on the MIT tagged-token dataflow architecture. IEEE Trans. Comput., 39(3):300–318, 1990. dl.acm.org/citation.cfm?id=78583.
[5] Jimmy Ba, Volodymyr Mnih, and Koray Kavukcuoglu. Multiple object recognition with visual attention. arXiv preprint arXiv:1412.7755, 2014. arxiv.org/abs/1412.7755.
[6] Françoise Beaufays. The neural networks behind Google Voice transcription, 2015. googleresearch.blogspot.com/2015/08/the-neural-networks-behind-google-voice.html.
[7] James Bergstra, Olivier Breuleux, Frédéric Bastien, Pascal Lamblin, Razvan Pascanu, Guillaume Desjardins, Joseph Turian, David Warde-Farley, and Yoshua Bengio. Theano: A CPU and GPU math expression compiler. In Proceedings of the Python for Scientific Computing Conference (SciPy), volume 4, page 3. Austin, TX, 2010. UMontreal PDF.
[8] Craig Chambers, Ashish Raniwala, Frances Perry, Stephen Adams, Robert R. Henry, Robert Bradshaw, and Nathan Weizenbaum. FlumeJava: Easy, efficient data-parallel pipelines. In ACM Sigplan Notices, volume 45, pages 363–375. ACM, 2010. research.google.com/pubs/archive/35650.pdf.
[9] Sharan Chetlur, Cliff Woolley, Philippe Vandermersch, Jonathan Cohen, John Tran, Bryan Catanzaro, and Evan Shelhamer. cuDNN: Efficient primitives for deep learning. arXiv preprint arXiv:1410.0759, 2014. arxiv.org/abs/1410.0759.
[10] Trishul Chilimbi, Yutaka Suzue, Johnson Apacible, and Karthik Kalyanaraman. Project Adam: Building an efficient and scalable deep learning training system. In 11th USENIX Symposium on Operating Systems Design and Implementation (OSDI 14), pages 571–582, 2014. www.usenix.org/system/files/conference/osdi14/osdi14-paper-chilimbi.pdf.
[11] Google turning its lucrative web search over to AI machines, 2015. www.bloomberg.com/news/articles/2015-10-26/google-turning-its-lucrative-web-search-over-to-ai-machines.
[12] Cliff Click. Global code motion/global value numbering. In ACM SIGPLAN Notices, volume 30, pages 246–257. ACM, 1995. courses.cs.washington.edu/courses/cse501/06wi/reading/click-pldi95.pdf.
[13] Ronan Collobert, Samy Bengio, and Johnny Mariéthoz. Torch: A modular machine learning software library. Technical report, IDIAP, 2002. infoscience.epfl.ch/record/82802/files/rr02-46.pdf.
[14] Jeffrey Dean, Gregory S. Corrado, Rajat Monga, Kai Chen, Matthieu Devin, Quoc V. Le, Mark Z. Mao, Marc'Aurelio Ranzato, Andrew Senior, Paul Tucker, Ke Yang, and Andrew Y. Ng. Large scale distributed deep networks. In NIPS, 2012. Google Research PDF.
[15] Jack J. Dongarra, Jeremy Du Croz, Sven Hammarling, and Iain S. Duff. A set of level 3 basic linear algebra subprograms. ACM Transactions on Mathematical Software (TOMS), 16(1):1–17, 1990. www.maths.manchester.ac.uk/~sven/pubs/Level3BLAS-1-TOMS16-90.pdf.
[16] Andrea Frome, Greg S. Corrado, Jonathon Shlens, Samy Bengio, Jeff Dean, Tomas Mikolov, et al. DeViSE: A deep visual-semantic embedding model. In Advances in Neural Information Processing Systems, pages 2121–2129, 2013. research.google.com/pubs/archive/41473.pdf.
[17] Javier Gonzalez-Dominguez, Ignacio Lopez-Moreno, Pedro J. Moreno, and Joaquin Gonzalez-Rodriguez. Frame-by-frame language identification in short utterances using deep neural networks. Neural Networks, 64:49–58, 2015.
[18] Otavio Good. How Google Translate squeezes deep learning onto a phone, 2015. googleresearch.blogspot.com/2015/07/how-google-translate-squeezes-deep.html.
[19] Ian J. Goodfellow, Yaroslav Bulatov, Julian Ibarz, Sacha Arnoud, and Vinay Shet. Multi-digit number recognition from Street View imagery using deep convolutional neural networks. In International Conference on Learning Representations, 2014. arxiv.org/pdf/1312.6082.
[20] Georg Heigold, Vincent Vanhoucke, Alan Senior, Patrick Nguyen, Marc'Aurelio Ranzato, Matthieu Devin, and Jeffrey Dean. Multilingual acoustic models using distributed deep neural networks. In Acoustics, Speech and Signal Processing (ICASSP), 2013 IEEE International Conference on, pages 8619–8623. IEEE, 2013. research.google.com/pubs/archive/40807.pdf.
[21] Geoffrey E. Hinton, Li Deng, Dong Yu, George E. Dahl, Abdel-rahman Mohamed, Navdeep Jaitly, Andrew Senior, Vincent Vanhoucke, Patrick Nguyen, Tara N. Sainath, et al. Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups. IEEE Signal Process. Mag., 29(6):82–97, 2012. www.cs.toronto.edu/~gdahl/papers/deepSpeechReviewSPM2012.pdf.
[22] Sepp Hochreiter and Jürgen Schmidhuber. Long short-term memory. Neural Computation, 9(8):1735–1780, 1997. ftp.idsia.ch/pub/juergen/lstm.pdf.
[23] Sergey Ioffe and Christian Szegedy. Batch normalization: Accelerating deep network training by reducing internal covariate shift. CoRR, abs/1502.03167, 2015. arxiv.org/abs/1502.03167.
[24] Michael Isard, Mihai Budiu, Yuan Yu, Andrew Birrell, and Dennis Fetterly. Dryad: Distributed data-parallel programs from sequential building blocks. In ACM SIGOPS Operating Systems Review, volume 41, pages 59–72. ACM, 2007. www.michaelisard.com/pubs/eurosys07.pdf.
[25] Benoît Jacob, Gaël Guennebaud, et al. Eigen library for linear algebra. eigen.tuxfamily.org.
[26] Yangqing Jia, Evan Shelhamer, Jeff Donahue, Sergey Karayev, Jonathan Long, Ross Girshick, Sergio Guadarrama, and Trevor Darrell. Caffe: Convolutional architecture for fast feature embedding. In Proceedings of the ACM International Conference on Multimedia, pages 675–678. ACM, 2014. arxiv.org/pdf/1408.5093.
[27] Andrej Karpathy, George Toderici, Sachin Shetty, Tommy Leung, Rahul Sukthankar, and Li Fei-Fei. Large-scale video classification with convolutional neural networks. In Computer Vision and Pattern Recognition (CVPR), 2014 IEEE Conference on, pages 1725–1732, 2014. research.google.com/pubs/archive/42455.pdf.
[28] A. Krizhevsky. cuda-convnet, 2014. code.google.com/p/cuda-convnet/.
[29] Alex Krizhevsky. One weird trick for parallelizing convolutional neural networks. arXiv preprint arXiv:1404.5997, 2014. arxiv.org/abs/1404.5997.
[30] Alex Krizhevsky, Vinod Nair, and Geoffrey Hinton. The CIFAR-10 dataset. www.cs.toronto.edu/~kriz/cifar.html.
[31] Quoc Le, Marc'Aurelio Ranzato, Rajat Monga, Matthieu Devin, Greg Corrado, Kai Chen, Jeff Dean, and Andrew Ng. Building high-level features using large scale unsupervised learning. In ICML'2012, 2012. Google Research PDF.
[32] Yann LeCun, Corinna Cortes, and Christopher J. C. Burges. The MNIST database of handwritten digits, 1998. yann.lecun.com/exdb/mnist/.
[33] Mu Li, Dave Andersen, and Alex Smola. Parameter server. parameterserver.org.
[34] Chris J. Maddison, Aja Huang, Ilya Sutskever, and David Silver. Move evaluation in Go using deep convolutional neural networks. arXiv preprint arXiv:1412.6564, 2014. arxiv.org/abs/1412.6564.
[35] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. Efficient estimation of word representations in vector space. In International Conference on Learning Representations: Workshops Track, 2013. arxiv.org/abs/1301.3781.
[36] Derek G. Murray, Frank McSherry, Rebecca Isaacs, Michael Isard, Paul Barham, and Martín Abadi. Naiad: A timely dataflow system. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 439–455. ACM, 2013. Microsoft Research PDF.
[37] Derek G. Murray, Malte Schwarzkopf, Christopher Smowton, Steven Smit, Anil Madhavapeddy, and Steven Hand. CIEL: A universal execution engine for distributed data-flow computing. In Proceedings of the Ninth USENIX Symposium on Networked Systems Design and Implementation, 2011. Usenix PDF.
[38] Arun Nair, Praveen Srinivasan, Sam Blackwell, Cagdas Alcicek, Rory Fearon, Alessandro De Maria, Vedavyas Panneershelvam, Mustafa Suleyman, Charles Beattie, Stig Petersen, et al. Massively parallel methods for deep reinforcement learning. arXiv preprint arXiv:1507.04296, 2015. arxiv.org/abs/1507.04296.
[39] CUDA Nvidia. CUBLAS library. NVIDIA Corporation, Santa Clara, California, 15, 2008. developer.nvidia.com/cublas.
[40] Jonathan Ragan-Kelley, Connelly Barnes, Andrew Adams, Sylvain Paris, Frédo Durand, and Saman Amarasinghe. Halide: A language and compiler for optimizing parallelism, locality, and recomputation in image processing pipelines. ACM SIGPLAN Notices, 48(6):519–530, 2013. people.csail.mit.edu/fredo/tmp/Halide-5min.pdf.
[41] Bharath Ramsundar, Steven Kearnes, Patrick Riley, Dale Webster, David Konerding, and Vijay Pande. Massively multitask networks for drug discovery. arXiv preprint arXiv:1502.02072, 2015. arxiv.org/abs/1502.02072.
[42] Benjamin Recht, Christopher Re, Stephen Wright, and Feng Niu. Hogwild: A lock-free approach to parallelizing stochastic gradient descent. In Advances in Neural Information Processing Systems, pages 693–701, 2011. papers.nips.cc/paper/4390-hogwild-a-lock-free-approach-to-parallelizing-stochastic-gradient-descent.
[43] Chuck Rosenberg. Improving Photo Search: A step across the semantic gap, 2013. googleresearch.blogspot.com/2013/06/improving-photo-search-step-across.html.
[44] Christopher J. Rossbach, Yuan Yu, Jon Currey, Jean-Philippe Martin, and Dennis Fetterly. Dandelion: A compiler and runtime for heterogeneous systems. In Proceedings of the Twenty-Fourth ACM Symposium on Operating Systems Principles, pages 49–68. ACM, 2013. research-srv.microsoft.com/pubs/201110/sosp13-dandelion-final.pdf.
[45] David E. Rumelhart, Geoffrey E. Hinton, and Ronald J. Williams. Learning representations by back-propagating errors. Cognitive Modeling, 5:3, 1988. www.cs.toronto.edu/~hinton/absps/naturebp.pdf.
[46] Kanishka Rao, Françoise Beaufays, and Johan Schalkwyk. Google Voice Search: faster and more accurate, 2015. googleresearch.blogspot.com/2015/09/google-voice-search-faster-and-more.html.
[47] Ilya Sutskever, Oriol Vinyals, and Quoc V. Le. Sequence to sequence learning with neural networks. In NIPS, 2014. papers.nips.cc/paper/5346-sequence-to-sequence-learning-with-neural.
[48] Christian Szegedy, Wei Liu, Yangqing Jia, Pierre Sermanet, Scott Reed, Dragomir Anguelov, Dumitru Erhan, Vincent Vanhoucke, and Andrew Rabinovich. Going deeper with convolutions. In CVPR'2015, 2015. arxiv.org/abs/1409.4842.
[49] Seiya Tokui. Chainer: A powerful, flexible and intuitive framework of neural networks. chainer.org.
[50] Vincent Vanhoucke. Speech recognition and deep learning, 2015. googleresearch.blogspot.com/2012/08/speech-recognition-and-deep-learning.html.
[51] Abhishek Verma, Luis Pedrosa, Madhukar Korupolu, David Oppenheimer, Eric Tune, and John Wilkes. Large-scale cluster management at Google with Borg. In Proceedings of the Tenth European Conference on Computer Systems, page 18. ACM, 2015. research.google.com/pubs/archive/43438.pdf.
[52] O. Vinyals, L. Kaiser, T. Koo, S. Petrov, I. Sutskever, and G. Hinton. Grammar as a foreign language. Technical report, arXiv:1412.7449, 2014. arxiv.org/abs/1412.7449.
[53] Oriol Vinyals, Meire Fortunato, and Navdeep Jaitly. Pointer networks. In NIPS, 2015. arxiv.org/abs/1506.03134.
[54] Dong Yu, Adam Eversole, Mike Seltzer, Kaisheng Yao, Zhiheng Huang, Brian Guenter, Oleksii Kuchaiev, Yu Zhang, Frank Seide, Huaming Wang, et al. An introduction to computational networks and the computational network toolkit. Technical report, Microsoft Research, 2014. research.microsoft.com/apps/pubs/?id=226641.
[55] Matei Zaharia, Mosharaf Chowdhury, Tathagata Das, Ankur Dave, Justin Ma, Murphy McCauley, Michael J. Franklin, Scott Shenker, and Ion Stoica. Resilient distributed datasets: A fault-tolerant abstraction for in-memory cluster computing. In Proceedings of the 9th USENIX Conference on Networked Systems Design and Implementation. USENIX Association, 2012. www.usenix.org/system/files/conference/nsdi12/nsdi12-final138.pdf.
[56] Matthew D. Zeiler, Marc'Aurelio Ranzato, Rajat Monga, Mark Mao, Ke Yang, Quoc Le, Patrick Nguyen, Andrew Senior, Vincent Vanhoucke, Jeff Dean, and Geoffrey E. Hinton. On rectified linear units for speech processing. In ICASSP, 2013. research.google.com/pubs/archive/40811.pdf.
Neural Architectures for Named Entity Recognition
arXiv:1603.01360v3 [cs.CL] 7 Apr 2016

Guillaume Lample†, Miguel Ballesteros‡†, Sandeep Subramanian†, Kazuya Kawakami†, Chris Dyer†
†Carnegie Mellon University  ‡NLP Group, Pompeu Fabra University
{glample,sandeeps,kkawakam,cdyer}@cs.cmu.edu, [email protected]

# Abstract

State-of-the-art named entity recognition systems rely heavily on hand-crafted features and domain-specific knowledge in order to learn effectively from the small, supervised training corpora that are available. In this paper, we introduce two new neural architectures: one based on bidirectional LSTMs and conditional random fields, and another that constructs and labels segments using a transition-based approach inspired by shift-reduce parsers. Our models rely on two sources of information about words: character-based word representations learned from the supervised corpus and unsupervised word representations learned from unannotated corpora. Our models obtain state-of-the-art performance in NER in four languages without resorting to any language-specific knowledge or resources such as gazetteers.1
# 1 Introduction

Named entity recognition (NER) is a challenging learning problem. On the one hand, in most languages and domains, there is only a very small amount of supervised training data available. On the other, there are few constraints on the kinds of words that can be names, so generalizing from this small sample of data is difficult. As a result, carefully constructed orthographic features and language-specific knowledge resources, such as gazetteers, are widely used for solving this task. Unfortunately, language-specific resources and features are costly to develop in new languages and new domains, making NER a challenge to adapt.
Unsupervised learning from unannotated corpora offers an alternative strategy for obtaining better generalization from small amounts of supervision. However, even systems that have relied extensively on unsupervised features (Collobert et al., 2011; Turian et al., 2010; Lin and Wu, 2009; Ando and Zhang, 2005b, inter alia) have used these to augment, rather than replace, hand-engineered features (e.g., knowledge about capitalization patterns and character classes in a particular language) and specialized knowledge resources (e.g., gazetteers).

In this paper, we present neural architectures for NER that use no language-specific resources or features beyond a small amount of supervised training data and unlabeled corpora. Our models are designed to capture two intuitions. First, since names often consist of multiple tokens, reasoning jointly over tagging decisions for each token is important. We compare two models here, (i) a bidirectional LSTM with a sequential conditional random field layer above it (LSTM-CRF; §2), and (ii) a new model that constructs and labels chunks of input sentences using an algorithm inspired by transition-based parsing with states represented by stack LSTMs (S-LSTM; §3).
Second, token-level evidence for "being a name" includes both orthographic evidence (what does the word being tagged as a name look like?) and distributional evidence (where does the word being tagged tend to occur in a corpus?). To capture orthographic sensitivity, we use a character-based word representation model (Ling et al., 2015b); to capture distributional sensitivity, we combine these representations with distributional representations (Mikolov et al., 2013b). Our word representations combine both of these, and dropout training is used to encourage the model to learn to trust both sources of evidence (§4).

1 The code of the LSTM-CRF and Stack-LSTM NER systems is available at https://github.com/glample/tagger and https://github.com/clab/stack-lstm-ner

Experiments in English, Dutch, German, and Spanish show that we are able to obtain state-of-the-art NER performance with the LSTM-CRF model in Dutch, German, and Spanish, and very near the state-of-the-art in English without any hand-engineered features or gazetteers (§5). The transition-based algorithm likewise surpasses the best previously published results in several languages, although it performs less well than the LSTM-CRF model.

# 2 LSTM-CRF Model

We provide a brief description of LSTMs and CRFs, and present a hybrid tagging architecture. This architecture is similar to the ones presented by Collobert et al. (2011) and Huang et al. (2015).

# 2.1 LSTM

Recurrent neural networks (RNNs) are a family of neural networks that operate on sequential data. They take as input a sequence of vectors (x1, x2, . . . , xn) and return another sequence (h1, h2, . . . , hn) that represents some information about the sequence at every step in the input. Although RNNs can, in theory, learn long dependencies, in practice they fail to do so and tend to be biased towards their most recent inputs in the sequence (Bengio et al., 1994). Long Short-term Memory Networks (LSTMs) have been designed to combat this issue by incorporating a memory cell and have been shown to capture long-range dependencies.
They do so using several gates that control the proportion of the input to give to the memory cell, and the proportion from the previous state to forget (Hochreiter and Schmidhuber, 1997). We use the following implementation:

$$
\begin{aligned}
i_t &= \sigma(W_{xi} x_t + W_{hi} h_{t-1} + W_{ci} c_{t-1} + b_i) \\
c_t &= (1 - i_t) \odot c_{t-1} + i_t \odot \tanh(W_{xc} x_t + W_{hc} h_{t-1} + b_c) \\
o_t &= \sigma(W_{xo} x_t + W_{ho} h_{t-1} + W_{co} c_t + b_o) \\
h_t &= o_t \odot \tanh(c_t),
\end{aligned}
$$

where $\sigma$ is the element-wise sigmoid function, and $\odot$ is the element-wise product.
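A minimal NumPy sketch of one step of this formulation (note the forget gate is tied to the input gate as $1 - i_t$); the parameter shapes, initialization, and toy dimensions below are illustrative only:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, p):
    """One step of the LSTM variant above (coupled input/forget gate)."""
    i_t = sigmoid(p["W_xi"] @ x_t + p["W_hi"] @ h_prev + p["W_ci"] @ c_prev + p["b_i"])
    c_tilde = np.tanh(p["W_xc"] @ x_t + p["W_hc"] @ h_prev + p["b_c"])
    c_t = (1.0 - i_t) * c_prev + i_t * c_tilde
    o_t = sigmoid(p["W_xo"] @ x_t + p["W_ho"] @ h_prev + p["W_co"] @ c_t + p["b_o"])
    h_t = o_t * np.tanh(c_t)
    return h_t, c_t

# Toy usage: d-dimensional inputs, H hidden units, small random parameters.
d, H = 4, 8
rng = np.random.RandomState(0)
mat = lambda rows, cols: 0.1 * rng.randn(rows, cols)
params = {
    "W_xi": mat(H, d), "W_hi": mat(H, H), "W_ci": mat(H, H), "b_i": np.zeros(H),
    "W_xc": mat(H, d), "W_hc": mat(H, H), "b_c": np.zeros(H),
    "W_xo": mat(H, d), "W_ho": mat(H, H), "W_co": mat(H, H), "b_o": np.zeros(H),
}
h, c = np.zeros(H), np.zeros(H)
for x_t in rng.randn(5, d):            # a length-5 toy input sequence
    h, c = lstm_step(x_t, h, c, params)
```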
For a given sentence (x1, x2, . . . , xn) containing n words, each represented as a d-dimensional vector, an LSTM computes a representation $\overrightarrow{h_t}$ of the left context of the sentence at every word t. Naturally, generating a representation of the right context $\overleftarrow{h_t}$ as well should add useful information. This can be achieved using a second LSTM that reads the same sequence in reverse. We will refer to the former as the forward LSTM and the latter as the backward LSTM. These are two distinct networks with different parameters. This forward and backward LSTM pair is referred to as a bidirectional LSTM (Graves and Schmidhuber, 2005).
The representation of a word using this model is obtained by concatenating its left and right context representations, $h_t = [\overrightarrow{h_t}; \overleftarrow{h_t}]$. These representations effectively include a representation of a word in context, which is useful for numerous tagging applications.

# 2.2 CRF Tagging Models

A very simple, but surprisingly effective, tagging model is to use the $h_t$'s as features to make independent tagging decisions for each output $y_t$ (Ling et al., 2015b). Despite this model's success in simple problems like POS tagging, its independent classification decisions are limiting when there are strong dependencies across output labels.
NER is one such task, since the "grammar" that characterizes interpretable sequences of tags imposes several hard constraints (e.g., I-PER cannot follow B-LOC; see §2.4 for details) that would be impossible to model with independence assumptions.

Therefore, instead of modeling tagging decisions independently, we model them jointly using a conditional random field (Lafferty et al., 2001). For an input sentence $X = (x_1, x_2, \ldots, x_n)$, we consider P to be the matrix of scores output by the bidirectional LSTM network. P is of size $n \times k$, where k is the number of distinct tags, and $P_{i,j}$ corresponds to the score of the jth tag of the ith word in a sentence.
For a sequence of predictions $y = (y_1, y_2, \ldots, y_n)$, we define its score to be

$$s(X, y) = \sum_{i=0}^{n} A_{y_i, y_{i+1}} + \sum_{i=1}^{n} P_{i, y_i},$$

where A is a matrix of transition scores such that $A_{i,j}$ represents the score of a transition from the tag i to tag j. $y_0$ and $y_n$ are the start and end tags of a sentence, that we add to the set of possible tags. A is therefore a square matrix of size $k + 2$.

A softmax over all possible tag sequences yields a probability for the sequence y:

$$p(y \mid X) = \frac{e^{s(X, y)}}{\sum_{\widetilde{y} \in Y_X} e^{s(X, \widetilde{y})}}.$$
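A small NumPy sketch of these quantities: the score $s(X, y)$, the log of the denominator above computed with the standard forward recursion, and the maximum-scoring tag sequence by dynamic programming. Function names, the start/end tag convention, and the toy sizes are illustrative only:

```python
import numpy as np

def logsumexp(x, axis=0):
    m = np.max(x, axis=axis)
    return m + np.log(np.sum(np.exp(x - np.expand_dims(m, axis)), axis=axis))

def score(P, A, y):
    """s(X, y): emissions P (n x k) plus transitions A ((k+2) x (k+2));
    tags k and k+1 stand in for the start and end tags."""
    n, k = P.shape
    start, end = k, k + 1
    path = [start] + list(y) + [end]
    return (sum(A[path[i], path[i + 1]] for i in range(len(path) - 1))
            + sum(P[i, y[i]] for i in range(n)))

def log_partition(P, A):
    """log of the denominator of p(y|X), via the forward recursion."""
    n, k = P.shape
    start, end = k, k + 1
    alpha = A[start, :k] + P[0]                      # all length-1 prefixes
    for i in range(1, n):
        alpha = logsumexp(alpha[:, None] + A[:k, :k] + P[i][None, :], axis=0)
    return logsumexp(alpha + A[:k, end])

def viterbi(P, A):
    """The maximum-scoring tag sequence, by dynamic programming."""
    n, k = P.shape
    start, end = k, k + 1
    delta, backptrs = A[start, :k] + P[0], []
    for i in range(1, n):
        m = delta[:, None] + A[:k, :k] + P[i][None, :]
        backptrs.append(np.argmax(m, axis=0))
        delta = np.max(m, axis=0)
    best = [int(np.argmax(delta + A[:k, end]))]
    for bp in reversed(backptrs):
        best.append(int(bp[best[-1]]))
    return list(reversed(best))

# Toy check with n = 3 words and k = 2 tags.
rng = np.random.RandomState(0)
P, A = rng.randn(3, 2), rng.randn(4, 4)
log_p_y = score(P, A, [0, 1, 1]) - log_partition(P, A)   # log p(y | X) for one sequence
best_path = viterbi(P, A)
```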
During training, we maximize the log-probability of the correct tag sequence:

$$\log(p(y \mid X)) = s(X, y) - \log\Big(\sum_{\widetilde{y} \in Y_X} e^{s(X, \widetilde{y})}\Big) = s(X, y) - \operatorname*{logadd}_{\widetilde{y} \in Y_X} s(X, \widetilde{y}), \quad (1)$$

where $Y_X$ represents all possible tag sequences (even those that do not verify the IOB format) for a sentence X. From the formulation above, it is evident that we encourage our network to produce a valid sequence of output labels. While decoding, we predict the output sequence that obtains the maximum score given by:

$$y^* = \operatorname*{argmax}_{\widetilde{y} \in Y_X} s(X, \widetilde{y}). \quad (2)$$

Since we are only modeling bigram interactions between outputs, both the summation in Eq. 1 and the maximum a posteriori sequence $y^*$ in Eq. 2 can be computed using dynamic programming.

# 2.3 Parameterization and Training

The scores associated with each tagging decision for each token (i.e., the $P_i$'s) are defined to be the dot product between the embedding of a word-in-context computed with a bidirectional LSTM, exactly the same as the POS tagging model of Ling et al. (2015b), and these are combined with bigram compatibility scores (i.e., the $A_{y,y'}$'s). This architecture is shown in Figure 1. Circles represent observed variables, diamonds are deterministic functions of their parents, and double circles are random variables.

Figure 1: Main architecture of the network. Word embeddings are given to a bidirectional LSTM. $l_i$ represents the word i and its left context, $r_i$ represents the word i and its right context. Concatenating these two vectors yields a representation of the word i in its context, $c_i$.
The parameters of this model are thus the matrix of bigram compatibility scores A, and the parameters that give rise to the matrix P, namely the parameters of the bidirectional LSTM, the linear feature weights, and the word embeddings. As in part 2.2, let $x_i$ denote the sequence of word embeddings for every word in a sentence, and $y_i$ be their associated tags. We return to a discussion of how the embeddings $x_i$ are modeled in Section 4. The sequence of word embeddings is given as input to a bidirectional LSTM, which returns a representation of the left and right context for each word as explained in 2.1.

These representations are concatenated ($c_i$) and linearly projected onto a layer whose size is equal to the number of distinct tags. Instead of using the softmax output from this layer, we use a CRF as previously described to take into account neighboring tags, yielding the final predictions for every word $y_i$. Additionally, we observed that adding a hidden layer between $c_i$ and the CRF layer marginally improved our results. All results reported with this model incorporate this extra layer. The parameters are trained to maximize Eq. 1 of observed sequences of NER tags in an annotated corpus, given the observed words.

# 2.4 Tagging Schemes

The task of named entity recognition is to assign a named entity label to every word in a sentence. A single named entity could span several tokens within a sentence. Sentences are usually represented in the IOB format (Inside, Outside, Beginning) where every token is labeled as B-label if the token is the beginning of a named entity, I-label if it is inside a named entity but not the first token within the named entity, or O otherwise. However, we decided to use the IOBES tagging scheme, a variant of IOB commonly used for named entity recognition, which encodes information about singleton entities (S) and explicitly marks the end of named entities (E). Using this scheme, tagging a word as I-label with high confidence narrows down the choices for the subsequent word to I-label or E-label; the IOB scheme, however, is only capable of determining that the subsequent word cannot be the interior of another label.
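A short Python sketch of the IOB-to-IOBES conversion just described; the helper name is ours, and tag strings follow the usual B-/I-/E-/S-/O convention:

```python
def iob_to_iobes(tags):
    """Convert an IOB-tagged sequence to IOBES: singleton entities become S-,
    and the last token of a multi-token entity becomes E-."""
    iobes = []
    for i, tag in enumerate(tags):
        if tag == "O":
            iobes.append(tag)
            continue
        prefix, label = tag.split("-", 1)
        nxt = tags[i + 1] if i + 1 < len(tags) else "O"
        entity_continues = nxt == "I-" + label
        if prefix == "B":
            iobes.append(("B-" if entity_continues else "S-") + label)
        else:  # prefix == "I"
            iobes.append(("I-" if entity_continues else "E-") + label)
    return iobes

# Example: "Mark Watney visited Mars"
print(iob_to_iobes(["B-PER", "I-PER", "O", "B-LOC"]))
# -> ['B-PER', 'E-PER', 'O', 'S-LOC']
```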
Ratinov and Roth (2009) and Dai et al. (2015) showed that using a more expressive tagging scheme like IOBES improves model performance marginally. However, we did not observe a significant improvement over the IOB tagging scheme.

# 3 Transition-Based Chunking Model

As an alternative to the LSTM-CRF discussed in the previous section, we explore a new architecture that chunks and labels a sequence of inputs using an algorithm similar to transition-based dependency parsing. This model directly constructs representations of the multi-token names (e.g., the name Mark Watney is composed into a single representation).

This model relies on a stack data structure to incrementally construct chunks of the input. To obtain representations of this stack used for predicting subsequent actions, we use the Stack-LSTM presented by Dyer et al. (2015), in which the LSTM is augmented with a "stack pointer." While sequential LSTMs model sequences from left to right, stack LSTMs permit embedding of a stack of objects that are both added to (using a push operation) and removed from (using a pop operation). This allows the Stack-LSTM to work like a stack that maintains a "summary embedding" of its contents.
We refer to this model as the Stack-LSTM or S-LSTM model for simplicity. Finally, we refer interested readers to the original paper (Dyer et al., 2015) for details about the Stack-LSTM model, since in this paper we merely use the same architecture through a new transition-based algorithm presented in the following section.

# 3.1 Chunking Algorithm

We designed a transition inventory, given in Figure 2, that is inspired by transition-based parsers, in particular the arc-standard parser of Nivre (2004). In this algorithm, we make use of two stacks (designated output and stack, representing, respectively, completed chunks and scratch space) and a buffer that contains the words that have yet to be processed. The transition inventory contains the following transitions: the SHIFT transition moves a word from the buffer to the stack; the OUT transition moves a word from the buffer directly into the output stack; and the REDUCE(y) transition pops all items from the top of the stack creating a "chunk," labels this with label y, and pushes a representation of this chunk onto the output stack. The algorithm completes when the stack and buffer are both empty. The algorithm is depicted in Figure 3, which shows the sequence of operations required to process the sentence Mark Watney visited Mars.
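A plain-Python sketch of this transition system, driven here by a fixed action sequence rather than a learned classifier (in the model, the next action is predicted from Stack-LSTM embeddings of the stack, buffer, output, and action history):

```python
def run_transitions(words, actions):
    """Apply SHIFT / OUT / REDUCE(label) transitions to produce labeled chunks.
    actions: list of ("SHIFT",), ("OUT",) or ("REDUCE", label)."""
    buffer = list(words)        # words yet to be processed (front = next word)
    stack = []                  # scratch space for the chunk being built
    output = []                 # completed items: words or (chunk, label) pairs
    segments = []
    for action in actions:
        if action[0] == "SHIFT":
            stack.append(buffer.pop(0))
        elif action[0] == "OUT":
            output.append(buffer.pop(0))
        elif action[0] == "REDUCE":
            label = action[1]
            chunk = tuple(stack)              # pop everything on the stack
            stack = []
            output.append((chunk, label))     # in the model: g(u, ..., v, r_y)
            segments.append((chunk, label))
    assert not buffer and not stack, "algorithm ends with empty stack and buffer"
    return output, segments

# The running example from the paper:
words = ["Mark", "Watney", "visited", "Mars"]
actions = [("SHIFT",), ("SHIFT",), ("REDUCE", "PER"),
           ("OUT",), ("SHIFT",), ("REDUCE", "LOC")]
print(run_transitions(words, actions)[1])
# -> [(('Mark', 'Watney'), 'PER'), (('Mars',), 'LOC')]
```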
The model is parameterized by defining a probability distribution over actions at each time step, given the current contents of the stack, buffer, and output, as well as the history of actions taken. Following Dyer et al. (2015), we use stack LSTMs to compute a fixed-dimensional embedding of each of these, and take a concatenation of these to obtain the full algorithm state. This representation is used to define a distribution over the possible actions that can be taken at each time step. The model is trained to maximize the conditional probability of sequences of reference actions (extracted from a labeled training corpus) given the input sentences. To label a new input sequence at test time, the maximum probability action is chosen greedily until the algorithm reaches a termination state. Although this is not guaranteed to find a global optimum, it is effective in practice. Since each token is either moved directly to the output (1 action) or first to the stack and then the output (2 actions), the total number of actions for a sequence of length n is maximally 2n. It is worth noting that the nature of this algorithm makes it agnostic to the tagging scheme used, since it directly predicts labeled chunks.
1603.01360#12
1603.01360#14
1603.01360
[ "1603.03793" ]
1603.01360#14
Neural Architectures for Named Entity Recognition
Although this is not guaranteed to find a global optimum, it is effective in practice. Since each token is either moved directly to the output (1 action) or first to the stack and then the output (2 actions), the total number of actions for a sequence of length n is at most 2n.

Out_t | Stack_t | Buffer_t | Action | Out_{t+1} | Stack_{t+1} | Buffer_{t+1} | Segments
O | S | (u, u), B | SHIFT | O | (u, u), S | B | --
O | (u, u), ..., (v, v), S | B | REDUCE(y) | g(u, ..., v, r_y), O | S | B | (u...v, y)
O | S | (u, u), B | OUT | g(u, r_∅), O | S | B | --
1603.01360#13
1603.01360#15
1603.01360
[ "1603.03793" ]
1603.01360#15
Neural Architectures for Named Entity Recognition
Figure 2: Transitions of the Stack-LSTM model, indicating the action applied and the resulting state. Bold symbols indicate (learned) embeddings of words and relations; script symbols indicate the corresponding words and relations.

Transition | Output | Stack | Buffer | Segment
(start) | [] | [] | [Mark, Watney, visited, Mars] |
SHIFT | [] | [Mark] | [Watney, visited, Mars] |
SHIFT | [] | [Mark, Watney] | [visited, Mars] |
REDUCE(PER) | [(Mark Watney)-PER] | [] | [visited, Mars] | (Mark Watney)-PER
OUT | [(Mark Watney)-PER, visited] | [] | [Mars] |
SHIFT | [(Mark Watney)-PER, visited] | [Mars] | [] |
REDUCE(LOC) | [(Mark Watney)-PER, visited, (Mars)-LOC] | [] | [] | (Mars)-LOC

Figure 3: Transition sequence for Mark Watney visited Mars with the Stack-LSTM model.

It is worth noting that the nature of this algorithm makes the model agnostic to the tagging scheme used, since it directly predicts labeled chunks.

# 3.2 Representing Labeled Chunks

When the REDUCE(y) operation is executed, the algorithm shifts a sequence of tokens (together with their vector embeddings) from the stack to the output buffer as a single completed chunk. To compute an embedding of this sequence, we run a bidirectional LSTM over the embeddings of its constituent tokens together with a token representing the type of the chunk being identified (i.e., y). This function is given as g(u, ..., v, r_y), where r_y is a learned embedding of a label type. Thus, the output buffer contains a single vector representation for each labeled chunk that is generated, regardless of its length.
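One way to realize the composition function g(u, ..., v, r_y) is sketched below, assuming PyTorch; how the final states are combined, the output projection, and all dimensions are illustrative choices rather than the authors' exact design.

```python
# Sketch: a bidirectional LSTM runs over the chunk's token embeddings plus a
# learned label embedding r_y, and its final states are combined into one vector.
import torch
import torch.nn as nn

class ChunkComposer(nn.Module):
    def __init__(self, token_dim, num_labels, out_dim):
        super().__init__()
        self.label_embed = nn.Embedding(num_labels, token_dim)
        self.bilstm = nn.LSTM(token_dim, out_dim // 2, bidirectional=True, batch_first=True)
        self.proj = nn.Linear(out_dim, out_dim)

    def forward(self, token_embs, label_id):
        # token_embs: (1, chunk_len, token_dim); append the label embedding r_y
        r_y = self.label_embed(label_id).unsqueeze(1)          # (1, 1, token_dim)
        seq = torch.cat([token_embs, r_y], dim=1)
        _, (h_n, _) = self.bilstm(seq)                         # h_n: (2, 1, out_dim // 2)
        both_dirs = torch.cat([h_n[0], h_n[1]], dim=-1)        # (1, out_dim)
        return torch.tanh(self.proj(both_dirs))                # one vector per labeled chunk

composer = ChunkComposer(token_dim=100, num_labels=4, out_dim=100)
chunk = torch.randn(1, 2, 100)                                 # e.g., embeddings of "Mark Watney"
vec = composer(chunk, torch.tensor([0]))                       # e.g., the PER label
```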
1603.01360#14
1603.01360#16
1603.01360
[ "1603.03793" ]
1603.01360#16
Neural Architectures for Named Entity Recognition
# Input Word Embeddings

The input layers to both of our models are vector representations of individual words. Learning independent representations for word types from the limited NER training data is a difficult problem: there are simply too many parameters to reliably estimate. Since many languages have orthographic or morphological evidence that something is a name (or not a name), we want representations that are sensitive to the spelling of words. We therefore use a model that constructs representations of words from representations of the characters they are composed of (4.1). Our second intuition is that names, which may individually be quite varied, appear in regular contexts in large corpora. Therefore we use embeddings learned from a large corpus that are sensitive to word order (4.2).

Figure 4: The character embeddings of the word "Mars" are given to a bidirectional LSTM. We concatenate their last outputs to an embedding from a lookup table to obtain a representation for this word.
1603.01360#15
1603.01360#17
1603.01360
[ "1603.03793" ]
1603.01360#17
Neural Architectures for Named Entity Recognition
Finally, to prevent the models from depending on one representation or the other too strongly, we use dropout training and find this is crucial for good generalization performance (4.3).

# 4.1 Character-based models of words

An important distinction of our work from most previous approaches is that we learn character-level features while training instead of hand-engineering prefix and suffix information about words.
1603.01360#16
1603.01360#18
1603.01360
[ "1603.03793" ]
1603.01360#18
Neural Architectures for Named Entity Recognition
Learning character-level embeddings has the advantage of learning representations specific to the task and domain at hand. They have been found useful for morphologically rich languages and for handling the out-of-vocabulary problem in tasks such as part-of-speech tagging and language modeling (Ling et al., 2015b) or dependency parsing (Ballesteros et al., 2015).

Figure 4 describes our architecture to generate a word embedding for a word from its characters. A character lookup table initialized at random contains an embedding for every character. The character embeddings corresponding to every character in a word are given in direct and reverse order to a forward and a backward LSTM. The embedding of a word derived from its characters is the concatenation of its forward and backward representations from the bidirectional LSTM. This character-level representation is then concatenated with a word-level representation from a word lookup table. During testing, words that do not have an embedding in the lookup table are mapped to a UNK embedding. To train the UNK embedding, we replace singletons with the UNK embedding with probability 0.5. In all our experiments, the hidden dimension of the forward and backward character LSTMs is 25 each, which results in our character-based representation of words being of dimension 50.
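The following sketch mirrors this description in PyTorch: forward and backward character LSTMs with hidden size 25 whose last outputs are concatenated with a word-lookup embedding. The vocabulary handling, the toy character ids, and the 100-dimensional word table are assumptions for illustration.

```python
# Sketch of the character-based word representation described above.
import torch
import torch.nn as nn

class CharWordEmbedder(nn.Module):
    def __init__(self, n_chars, n_words, char_dim=25, char_hidden=25, word_dim=100):
        super().__init__()
        self.char_embed = nn.Embedding(n_chars, char_dim)
        self.char_lstm = nn.LSTM(char_dim, char_hidden, bidirectional=True, batch_first=True)
        self.word_embed = nn.Embedding(n_words, word_dim)   # pretrained lookup table in the paper

    def forward(self, char_ids, word_id):
        chars = self.char_embed(char_ids)                   # (1, word_length, char_dim)
        _, (h_n, _) = self.char_lstm(chars)                 # h_n: (2, 1, char_hidden)
        char_repr = torch.cat([h_n[0], h_n[1]], dim=-1)     # 50-dim character representation
        word_repr = self.word_embed(word_id)                # word-level lookup embedding
        return torch.cat([char_repr, word_repr], dim=-1)    # final word embedding

embedder = CharWordEmbedder(n_chars=128, n_words=10000)
mars = torch.tensor([[ord(c) % 128 for c in "Mars"]])       # toy character ids
emb = embedder(mars, torch.tensor([42]))                    # shape (1, 150)
```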
1603.01360#17
1603.01360#19
1603.01360
[ "1603.03793" ]
1603.01360#19
Neural Architectures for Named Entity Recognition
Recurrent models like RNNs and LSTMs are capable of encoding very long sequences; however, they have a representation biased towards their most recent inputs. As a result, we expect the final representation of the forward LSTM to be an accurate representation of the suffix of the word, and the final state of the backward LSTM to be a better representation of its prefix. Alternative approaches, most notably convolutional networks, have been proposed to learn representations of words from their characters (Zhang et al., 2015; Kim et al., 2015). However, convnets are designed to discover position-invariant features of their inputs. While this is appropriate for many problems, e.g., image recognition (a cat can appear anywhere in a picture), we argue that important information is position dependent (e.g., prefixes and suffixes encode different information than stems), making LSTMs an a priori better function class for modeling the relationship between words and their characters.
1603.01360#18
1603.01360#20
1603.01360
[ "1603.03793" ]
1603.01360#20
Neural Architectures for Named Entity Recognition
# 4.2 Pretrained embeddings

As in Collobert et al. (2011), we use pretrained word embeddings to initialize our lookup table. We observe significant improvements using pretrained word embeddings over randomly initialized ones. Embeddings are pretrained using skip-n-gram (Ling et al., 2015a), a variation of word2vec (Mikolov et al., 2013a) that accounts for word order.
1603.01360#19
1603.01360#21
1603.01360
[ "1603.03793" ]
1603.01360#21
Neural Architectures for Named Entity Recognition
These embeddings are fine-tuned during training. Word embeddings for Spanish, Dutch, German and English are trained using the Spanish Gigaword version 3, the Leipzig corpora collection, the German monolingual training data from the 2010 Machine Translation Workshop, and the English Gigaword version 4 (with the LA Times and NY Times portions removed), respectively.2 We use an embedding dimension of 100 for English and 64 for the other languages, a minimum word frequency cutoff of 4, and a window size of 8.

2 (Graff, 2011; Biemann et al., 2007; Callison-Burch et al., 2010; Parker et al., 2009)

# 4.3 Dropout training

Initial experiments showed that character-level embeddings did not improve our overall performance when used in conjunction with pretrained word representations. To encourage the model to depend on both representations, we use dropout training (Hinton et al., 2012), applying a dropout mask to the final embedding layer just before the input to the bidirectional LSTM in Figure 1. We observe a significant improvement in our model's performance after using dropout (see Table 5).
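A minimal sketch of this dropout placement, assuming PyTorch and the hyperparameters reported later in Section 5.1 (dropout 0.5, LSTM hidden size 100); the 150-dimensional input and the random batch are placeholders.

```python
# Dropout is applied to the concatenated word embeddings immediately before the
# bidirectional LSTM, as described in Section 4.3.
import torch
import torch.nn as nn

embedding_dropout = nn.Dropout(p=0.5)
bilstm = nn.LSTM(input_size=150, hidden_size=100, bidirectional=True, batch_first=True)

word_embeddings = torch.randn(1, 9, 150)        # (batch, sentence_length, embedding_dim)
lstm_in = embedding_dropout(word_embeddings)    # mask is applied only during training
lstm_out, _ = bilstm(lstm_in)                   # contextual representations for the CRF layer
```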
1603.01360#20
1603.01360#22
1603.01360
[ "1603.03793" ]
1603.01360#22
Neural Architectures for Named Entity Recognition
# 5 Experiments

This section presents the methods we use to train our models, the results we obtained on various tasks, and the impact of our networks' configuration on model performance.

# 5.1 Training

For both models presented, we train our networks using the back-propagation algorithm, updating our parameters on every training example, one at a time, using stochastic gradient descent (SGD) with a learning rate of 0.01 and gradient clipping at 5.0.
1603.01360#21
1603.01360#23
1603.01360
[ "1603.03793" ]
1603.01360#23
Neural Architectures for Named Entity Recognition
Our LSTM-CRF model uses a single layer for the forward and backward LSTMs whose dimen- sions are set to 100. Tuning this dimension did not signiï¬ cantly impact model performance. We set the dropout rate to 0.5. Using higher rates nega- tively impacted our results, while smaller rates led to longer training time. The stack-LSTM model uses two layers each of dimension 100 for each stack. The embeddings of the actions used in the composition functions have 16 dimensions each, and the output embedding is of dimension 20. We experimented with different dropout rates and reported the scores using the best dropout rate for each language.3 It is a greedy model that apply locally optimal actions until the entire sentence is processed, further improvements might be obtained with beam search (Zhang and Clark, 2011) or training with exploration (Ballesteros et al., 2016).
1603.01360#22
1603.01360#24
1603.01360
[ "1603.03793" ]
1603.01360#24
Neural Architectures for Named Entity Recognition
# 5.2 Data Sets

We test our model on different datasets for named entity recognition. To demonstrate our model's ability to generalize to different languages, we present results on the CoNLL-2002 and CoNLL-2003 datasets (Tjong Kim Sang, 2002; Tjong Kim Sang and De Meulder, 2003), which contain independent named entity labels for English, Spanish, German and Dutch. All datasets contain four different types of named entities: locations, persons, organizations, and miscellaneous entities that do not belong to any of the three previous categories. Although POS tags were made available for all datasets, we did not include them in our models. We did not perform any dataset preprocessing, apart from replacing every digit with a zero in the English NER dataset.

3 English (D=0.2); German, Spanish and Dutch (D=0.3).

# 5.3 Results

Table 1 presents our comparisons with other models for named entity recognition in English. To make the comparison between our model and others fair, we report the scores of other models with and without the use of external labeled data such as gazetteers and knowledge bases. Our models do not use gazetteers or any external labeled resources. The best score reported on this task is by Luo et al. (2015). They obtained an F1 of 91.2 by jointly modeling the NER and entity linking tasks (Hoffart et al., 2011). Their model uses many hand-engineered features, including spelling features, WordNet clusters, Brown clusters, POS tags, chunk tags, as well as stemming and external knowledge bases like Freebase and Wikipedia. Our LSTM-CRF model outperforms all other systems, including the ones using external labeled data like gazetteers. Our Stack-LSTM model also outperforms all previous models that do not incorporate external features, apart from the one presented by Chiu and Nichols (2015).

Tables 2, 3 and 4 present our results on NER for German, Dutch and Spanish, respectively, in comparison to other models. On these three languages, the LSTM-CRF model significantly outperforms all previous methods, including the ones using external labeled data.
1603.01360#23
1603.01360#25
1603.01360
[ "1603.03793" ]
1603.01360#25
Neural Architectures for Named Entity Recognition
The only exception is Dutch, where the model of Gillick et al. (2015) can perform better by leveraging information from other NER datasets. The Stack-LSTM also consistently presents state-of-the-art (or close to state-of-the-art) results compared to systems that do not use external data.

As we can see in the tables, the Stack-LSTM model is more dependent on character-based representations to achieve competitive performance; we hypothesize that the LSTM-CRF model requires less orthographic information since it gets more contextual information out of the bidirectional LSTMs, whereas the Stack-LSTM model consumes the words one by one and relies only on the word representations when it chunks words.
1603.01360#24
1603.01360#26
1603.01360
[ "1603.03793" ]
1603.01360#26
Neural Architectures for Named Entity Recognition
# 5.4 Network architectures

Our models had several components that we could tweak to understand their impact on the overall performance. We explored the impact that the CRF, the character-level representations, pretraining of our

Model | F1
Collobert et al. (2011)* | 89.59
Lin and Wu (2009) | 83.78
Lin and Wu (2009)* | 90.90
Huang et al. (2015)* | 90.10
Passos et al. (2014) | 90.05
Passos et al. (2014)* | 90.90
Luo et al. (2015)* + gaz | 89.9
Luo et al. (2015)* + gaz + linking | 91.2
Chiu and Nichols (2015) | 90.69
Chiu and Nichols (2015)* | 90.77
LSTM-CRF (no char) | 90.20
LSTM-CRF | 90.94
S-LSTM (no char) | 87.96
S-LSTM | 90.33

Table 1: English NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data.

Model | F1
Florian et al. (2003)* | 72.41
Ando and Zhang (2005a) | 75.27
Qi et al. (2009) | 75.72
Gillick et al. (2015) | 72.08
Gillick et al. (2015)* | 76.22
LSTM-CRF (no char) | 75.06
LSTM-CRF | 78.76
S-LSTM (no char) | 65.87
S-LSTM | 75.66
1603.01360#25
1603.01360#27
1603.01360
[ "1603.03793" ]
1603.01360#27
Neural Architectures for Named Entity Recognition
Table 2: German NER results (CoNLL-2003 test set). * indicates models trained with the use of external labeled data.

Model | F1
Carreras et al. (2002) | 77.05
Nothman et al. (2013) | 78.6
Gillick et al. (2015) | 78.08
Gillick et al. (2015)* | 82.84
LSTM-CRF (no char) | 73.14
LSTM-CRF | 81.74
S-LSTM (no char) | 69.90
S-LSTM | 79.88
1603.01360#26
1603.01360#28
1603.01360
[ "1603.03793" ]
1603.01360#28
Neural Architectures for Named Entity Recognition
Table 3: Dutch NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data.

Model | F1
Carreras et al. (2002)* | 81.39
Santos and Guimarães (2015) | 82.21
Gillick et al. (2015) | 81.83
Gillick et al. (2015)* | 82.95
LSTM-CRF (no char) | 83.44
LSTM-CRF | 85.75
S-LSTM (no char) | 79.46
S-LSTM | 83.93
1603.01360#27
1603.01360#29
1603.01360
[ "1603.03793" ]
1603.01360#29
Neural Architectures for Named Entity Recognition
Table 4: Spanish NER (CoNLL-2002 test set). * indicates models trained with the use of external labeled data.

word embeddings and dropout had on our LSTM-CRF model. We observed that pretraining our word embeddings gave us the biggest improvement in overall performance, of +7.31 in F1. The CRF layer gave us an increase of +1.79, while using dropout resulted in a difference of +1.17, and finally learning character-level word embeddings resulted in an increase of about +0.74. For the Stack-LSTM we performed a similar set of experiments. Results with different architectures are given in Table 5.

Model | Variant | F1
LSTM | char + dropout + pretrain | 89.15
LSTM-CRF | char + dropout | 83.63
LSTM-CRF | pretrain | 88.39
LSTM-CRF | pretrain + char | 89.77
LSTM-CRF | pretrain + dropout | 90.20
LSTM-CRF | pretrain + dropout + char | 90.94
S-LSTM | char + dropout | 80.88
S-LSTM | pretrain | 86.67
S-LSTM | pretrain + char | 89.32
S-LSTM | pretrain + dropout | 87.96
S-LSTM | pretrain + dropout + char | 90.33

Table 5: English NER results with our models, using different configurations. "pretrain" refers to models that include pretrained word embeddings, "char" refers to models that include character-based modeling of words, and "dropout" refers to models that include dropout.

# 6 Related Work

In the CoNLL-2002 shared task, Carreras et al. (2002) obtained among the best results on both Dutch and Spanish by combining several small fixed-depth decision trees. The next year, in the CoNLL-2003 shared task, Florian et al. (2003) obtained the best score on German by combining the output of four diverse classifiers.
1603.01360#28
1603.01360#30
1603.01360
[ "1603.03793" ]
1603.01360#30
Neural Architectures for Named Entity Recognition
Qi et al. (2009) later improved on this with a neural network by doing unsupervised learning on a massive unlabeled corpus.

Several other neural architectures have previously been proposed for NER. For instance, Collobert et al. (2011) use a CNN over a sequence of word embeddings with a CRF layer on top. This can be thought of as our first model without character-level embeddings and with the bidirectional LSTM replaced by a CNN. More recently, Huang et al. (2015) presented a model similar to our LSTM-CRF, but using hand-crafted spelling features. Zhou and Xu (2015) also used a similar model and adapted it to the semantic role labeling task. Lin and Wu (2009) used a linear chain CRF with L2 regularization, and added phrase cluster features extracted from web data as well as spelling features. Passos et al. (2014) also used a linear chain CRF with spelling features and gazetteers.

Language independent NER models like ours have also been proposed in the past.
1603.01360#29
1603.01360#31
1603.01360
[ "1603.03793" ]
1603.01360#31
Neural Architectures for Named Entity Recognition
Cucerzan and Yarowsky (1999; 2002) present semi-supervised bootstrapping algorithms for named entity recognition by co-training character-level (word-internal) and token-level (context) features. Eisenstein et al. (2011) use Bayesian nonparametrics to construct a database of named entities in an almost unsupervised setting. Ratinov and Roth (2009) quantitatively compare several approaches for NER and build their own supervised model using a regularized average perceptron and aggregating context information.

Finally, there is currently a lot of interest in models for NER that use letter-based representations. Gillick et al. (2015) model the task of sequence labeling as a sequence-to-sequence learning problem and incorporate character-based representations into their encoder model. Chiu and Nichols (2015) employ an architecture similar to ours, but instead use CNNs to learn character-level features, in a way similar to the work by Santos and Guimarães (2015).
1603.01360#30
1603.01360#32
1603.01360
[ "1603.03793" ]
1603.01360#32
Neural Architectures for Named Entity Recognition
# 7 Conclusion

This paper presents two neural architectures for sequence labeling that provide the best NER results ever reported in standard evaluation settings, even compared with models that use external resources such as gazetteers.

A key aspect of our models is that they model output label dependencies, either via a simple CRF architecture or by using a transition-based algorithm to explicitly construct and label chunks of the input. Word representations are also crucially important for success: we use both pretrained word representations and "character-based" representations that capture morphological and orthographic information. To prevent the learner from depending too heavily on one representation class, dropout is used.
1603.01360#31
1603.01360#33
1603.01360
[ "1603.03793" ]
1603.01360#33
Neural Architectures for Named Entity Recognition
# Acknowledgments

This work was sponsored in part by the Defense Advanced Research Projects Agency (DARPA) Information Innovation Office (I2O) under the Low Resource Languages for Emergent Incidents (LORELEI) program issued by DARPA/I2O under Contract No. HR0011-15-C-0114. Miguel Ballesteros is supported by the European Commission under contract numbers FP7-ICT-610411 (project MULTISENSOR) and H2020-RIA-645012 (project KRISTINA).
1603.01360#32
1603.01360#34
1603.01360
[ "1603.03793" ]
1603.01360#34
Neural Architectures for Named Entity Recognition
# References [Ando and Zhang2005a] Rie Kubota Ando and Tong Zhang. 2005a. A framework for learning predictive structures from multiple tasks and unlabeled data. The Journal of Machine Learning Research, 6:1817â 1853. [Ando and Zhang2005b] Rie Kubota Ando and Tong Zhang. 2005b. Learning predictive structures. JMLR, 6:1817â 1853. [Ballesteros et al.2015] Miguel Ballesteros, Chris Dyer, and Noah A. Smith. 2015.
1603.01360#33
1603.01360#35
1603.01360
[ "1603.03793" ]
1603.01360#35
Neural Architectures for Named Entity Recognition
Improved transition-based dependency parsing by modeling characters instead of words with LSTMs. In Proceedings of EMNLP. [Ballesteros et al.2016] Miguel Ballesteros, Yoav Gold- erg, Chris Dyer, and Noah A. Smith. 2016. Train- ing with Exploration Improves a Greedy Stack-LSTM Parser. In arXiv:1603.03793. [Bengio et al.1994] Yoshua Bengio, Patrice Simard, and Paolo Frasconi. 1994. Learning long-term depen- dencies with gradient descent is difï¬
1603.01360#34
1603.01360#36
1603.01360
[ "1603.03793" ]
1603.01360#36
Neural Architectures for Named Entity Recognition
cult. Neural Net- works, IEEE Transactions on, 5(2):157â 166. [Biemann et al.2007] Chris Biemann, Gerhard Heyer, Uwe Quasthoff, and Matthias Richter. 2007. The leipzig corpora collection-monolingual corpora of standard size. Proceedings of Corpus Linguistic. Callison-Burch, Philipp Koehn, Christof Monz, Kay Peterson, Mark Przybocki, and Omar F Zaidan. Findings of the 2010 joint workshop on statistical machine In translation and metrics for machine translation. Proceedings of the Joint Fifth Workshop on Statistical Machine Translation and MetricsMATR, pages 17â
1603.01360#35
1603.01360#37
1603.01360
[ "1603.03793" ]
1603.01360#37
Neural Architectures for Named Entity Recognition
53. Association for Computational Linguistics. [Carreras et al.2002] Xavier Carreras, Llu´ıs M`arquez, and Llu´ıs Padr´o. 2002. Named entity extraction using ad- aboost, proceedings of the 6th conference on natural language learning. August, 31:1â 4. [Chiu and Nichols2015] Jason PC Chiu and Eric Nichols. 2015.
1603.01360#36
1603.01360#38
1603.01360
[ "1603.03793" ]
1603.01360#38
Neural Architectures for Named Entity Recognition
Named entity recognition with bidirectional lstm-cnns. arXiv preprint arXiv:1511.08308. [Collobert et al.2011] Ronan Collobert, Jason Weston, L´eon Bottou, Michael Karlen, Koray Kavukcuoglu, and Pavel Kuksa. 2011. Natural language process- ing (almost) from scratch. The Journal of Machine Learning Research, 12:2493â 2537. [Cucerzan and Yarowsky1999] Silviu and Cucerzan David Yarowsky.
1603.01360#37
1603.01360#39
1603.01360
[ "1603.03793" ]
1603.01360#39
Neural Architectures for Named Entity Recognition
Language independent named entity recognition combining morphological and contextual evidence. In Proceedings of the 1999 Joint SIGDAT Conference on EMNLP and VLC, pages 90â 99. and David Yarowsky. 2002. Language independent ner using a uniï¬ ed model of internal and contextual In proceedings of the 6th conference on evidence. Natural language learning-Volume 20, pages 1â 4. Association for Computational Linguistics. [Dai et al.2015] Hong-Jie Dai, Po-Ting Lai, Yung-Chun Chang, and Richard Tzong-Han Tsai. 2015. Enhanc- ing of chemical compound and drug name recogni- tion using representative tag scheme and ï¬ ne-grained Journal of cheminformatics, 7(Suppl tokenization. 1):S14. [Dyer et al.2015] Chris Dyer, Miguel Ballesteros, Wang Ling, Austin Matthews, and Noah A. Smith. 2015.
1603.01360#38
1603.01360#40
1603.01360
[ "1603.03793" ]
1603.01360#40
Neural Architectures for Named Entity Recognition
Transition-based dependency parsing with stack long short-term memory. In Proc. ACL. [Eisenstein et al.2011] Jacob Eisenstein, Tae Yano, William W Cohen, Noah A Smith, and Eric P Xing. 2011. Structured databases of named entities from bayesian nonparametrics. In Proceedings of the First Workshop on Unsupervised Learning in NLP, pages 2â 12. Association for Computational Linguistics. Ittycheriah, [Florian et al.2003] Radu Florian, Abe 2003. Named Hongyan Jing, and Tong Zhang. entity recognition through classiï¬ er combination. In Proceedings of the seventh conference on Natural language learning at HLT-NAACL 2003-Volume 4, pages 168â 171. Association for Computational Linguistics. [Gillick et al.2015] Dan Gillick, Cliff Brunk, Oriol Vinyals, and Amarnag Subramanya. 2015. Multilin- gual language processing from bytes. arXiv preprint arXiv:1512.00103. [Graff2011] David Graff. 2011. Spanish gigaword third edition (ldc2011t12). Linguistic Data Consortium, Univer-sity of Pennsylvania, Philadelphia, PA. [Graves and Schmidhuber2005] Alex Graves and J¨urgen Schmidhuber. 2005. Framewise phoneme classiï¬ - In Proc. cation with bidirectional LSTM networks. IJCNN. [Hinton et al.2012] Geoffrey E Hinton, Nitish Srivas- tava, Alex Krizhevsky, Ilya Sutskever, and Ruslan R Salakhutdinov. 2012. Improving neural networks by preventing co-adaptation of feature detectors. arXiv preprint arXiv:1207.0580. [Hochreiter and Schmidhuber1997] Sepp Hochreiter and J¨urgen Schmidhuber. 1997. Long short-term memory. Neural Computation, 9(8):1735â
1603.01360#39
1603.01360#41
1603.01360
[ "1603.03793" ]
1603.01360#41
Neural Architectures for Named Entity Recognition
1780. [Hoffart et al.2011] Johannes Hoffart, Mohamed Amir Ilaria Bordino, Hagen F¨urstenau, Manfred Yosef, Pinkal, Marc Spaniol, Bilyana Taneva, Stefan Thater, and Gerhard Weikum. 2011. Robust disambiguation of named entities in text. In Proceedings of the Con- ference on Empirical Methods in Natural Language Processing, pages 782â 792. Association for Compu- tational Linguistics. [Huang et al.2015] Zhiheng Huang, Wei Xu, and Kai Yu. 2015. Bidirectional LSTM-CRF models for sequence tagging. CoRR, abs/1508.01991. [Kim et al.2015] Yoon Kim, Yacine Jernite, David Son- tag, and Alexander M. Rush. 2015.
1603.01360#40
1603.01360#42
1603.01360
[ "1603.03793" ]
1603.01360#42
Neural Architectures for Named Entity Recognition
Character-aware neural language models. CoRR, abs/1508.06615. [Kingma and Ba2014] Diederik Kingma and Jimmy Ba. 2014. Adam: A method for stochastic optimization. arXiv preprint arXiv:1412.6980. [Lafferty et al.2001] John Lafferty, Andrew McCallum, and Fernando CN Pereira. 2001. Conditional random ï¬ elds: Probabilistic models for segmenting and label- ing sequence data. In Proc. ICML. [Lin and Wu2009] Dekang Lin and Xiaoyun Wu. 2009. Phrase clustering for discriminative learning. In Pro- ceedings of the Joint Conference of the 47th Annual Meeting of the ACL and the 4th International Joint Conference on Natural Language Processing of the AFNLP:
1603.01360#41
1603.01360#43
1603.01360
[ "1603.03793" ]
1603.01360#43
Neural Architectures for Named Entity Recognition
Volume 2-Volume 2, pages 1030â 1038. As- sociation for Computational Linguistics. [Ling et al.2015a] Wang Ling, Lin Chu-Cheng, Yulia Tsvetkov, Silvio Amir, R´amon Fernandez Astudillo, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015a. Not all contexts are created equal: Better word representations with variable attention. In Proc. EMNLP. [Ling et al.2015b] Wang Ling, Tiago Lu´ıs, Lu´ıs Marujo, Ram´on Fernandez Astudillo, Silvio Amir, Chris Dyer, Alan W Black, and Isabel Trancoso. 2015b.
1603.01360#42
1603.01360#44
1603.01360
[ "1603.03793" ]
1603.01360#44
Neural Architectures for Named Entity Recognition
Finding function in form: Compositional character models for open vocabulary word representation. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP). [Luo et al.2015] Gang Luo, Xiaojiang Huang, Chin-Yew Lin, and Zaiqing Nie. 2015. Joint named entity recog- nition and disambiguation. In Proc. EMNLP. [Mikolov et al.2013a] Tomas Mikolov, Kai Chen, Greg Corrado, and Jeffrey Dean. 2013a.
1603.01360#43
1603.01360#45
1603.01360
[ "1603.03793" ]
1603.01360#45
Neural Architectures for Named Entity Recognition
Efï¬ cient estima- tion of word representations in vector space. arXiv preprint arXiv:1301.3781. Ilya Sutskever, Kai Chen, Greg S Corrado, and Jeff Dean. 2013b. Distributed representations of words and phrases and their compositionality. In Proc. NIPS. [Nivre2004] Joakim Nivre. 2004. Incrementality in de- In Proceedings of terministic dependency parsing. the Workshop on Incremental Parsing: Bringing En- gineering and Cognition Together. [Nothman et al.2013] Joel Nothman, Nicky Ringland, Will Radford, Tara Murphy, and James R Curran. 2013.
1603.01360#44
1603.01360#46
1603.01360
[ "1603.03793" ]
1603.01360#46
Neural Architectures for Named Entity Recognition
Learning multilingual named entity recognition from wikipedia. Artiï¬ cial Intelligence, 194:151â 175. [Parker et al.2009] Robert Parker, David Graff, Junbo Kong, Ke Chen, and Kazuaki Maeda. 2009. English gigaword fourth edition (ldc2009t13). Linguistic Data Consortium, Univer-sity of Pennsylvania, Philadel- phia, PA. [Passos et al.2014] Alexandre Passos, Vineet Kumar, and Andrew McCallum. 2014. Lexicon infused phrase arXiv embeddings for named entity resolution. preprint arXiv:1404.5367. [Qi et al.2009] Yanjun Qi, Ronan Collobert, Pavel Kuksa, Koray Kavukcuoglu, and Jason Weston. 2009. Com- bining labeled and unlabeled data with word-class dis- In Proceedings of the 18th ACM tribution learning. conference on Information and knowledge manage- ment, pages 1737â
1603.01360#45
1603.01360#47
1603.01360
[ "1603.03793" ]
1603.01360#47
Neural Architectures for Named Entity Recognition
1740. ACM. [Ratinov and Roth2009] Lev Ratinov and Dan Roth. 2009. Design challenges and misconceptions in named entity recognition. In Proceedings of the Thir- teenth Conference on Computational Natural Lan- guage Learning, pages 147â 155. Association for Computational Linguistics. [Santos and GuimarË aes2015] Cicero Nogueira dos Santos and Victor GuimarË aes. 2015. Boosting named entity recognition with neural character embeddings. arXiv preprint arXiv:1505.05008. [Tjong Kim Sang and De Meulder2003] Erik F. Tjong Kim Sang and Fien De Meulder. 2003.
1603.01360#46
1603.01360#48
1603.01360
[ "1603.03793" ]
1603.01360#48
Neural Architectures for Named Entity Recognition
Introduction to the conll-2003 shared task: Language-independent named entity recognition. In Proc. CoNLL. [Tjong Kim Sang2002] Erik F. Tjong Kim Sang. 2002. Introduction to the conll-2002 shared task: Language- In Proc. independent named entity recognition. CoNLL. and Yoshua Bengio. 2010. Word representations: A sim- ple and general method for semi-supervised learning. In Proc. ACL. [Zeiler2012] Matthew D Zeiler. An adaptive learning rate method. arXiv:1212.5701. 2012. Adadelta: arXiv preprint [Zhang and Clark2011] Yue Zhang and Stephen Clark. 2011. Syntactic processing using the generalized per- ceptron and beam search. Computational Linguistics, 37(1). [Zhang et al.2015] Xiang Zhang, Junbo Zhao, and Yann LeCun. 2015. Character-level convolutional networks for text classiï¬
1603.01360#47
1603.01360#49
1603.01360
[ "1603.03793" ]
1603.01360#49
Neural Architectures for Named Entity Recognition
cation. In Advances in Neural Informa- tion Processing Systems, pages 649â 657. [Zhou and Xu2015] Jie Zhou and Wei Xu. 2015. End-to- end learning of semantic role labeling using recurrent neural networks. In Proceedings of the Annual Meet- ing of the Association for Computational Linguistics.
1603.01360#48
1603.01360
[ "1603.03793" ]
1603.01025#0
Convolutional Neural Networks using Logarithmic Data Representation
arXiv:1603.01025v2 [cs.NE] 17 Mar 2016

# Convolutional Neural Networks using Logarithmic Data Representation

# Daisuke Miyashita
Stanford University, Stanford, CA 94305 USA
Toshiba, Kawasaki, Japan
[email protected]

# Edward H. Lee
Stanford University, Stanford, CA 94305 USA
[email protected]

# Boris Murmann
Stanford University, Stanford, CA 94305 USA
[email protected]

# Abstract
1603.01025#1
1603.01025
[ "1510.03009" ]
1603.01025#1
Convolutional Neural Networks using Logarithmic Data Representation
Recent advances in convolutional neural networks have considered model complexity and hardware efficiency to enable deployment onto embedded systems and mobile devices. For example, it is now well-known that the arithmetic operations of deep networks can be encoded down to 8-bit fixed-point without significant deterioration in performance. However, further reduction in precision down to as low as 3-bit fixed-point results in significant losses in performance. In this paper we propose a new data representation that enables state-of-the-art networks to be encoded to 3 bits with negligible loss in classification performance. To perform this, we take advantage of the fact that the weights and activations in a trained network naturally have non-uniform distributions. Using a non-uniform, base-2 logarithmic representation to encode weights, communicate activations, and perform dot-products enables networks to 1) achieve higher classification accuracies than fixed-point at the same resolution and 2) eliminate bulky digital multipliers. Finally, we propose an end-to-end training procedure that uses log representation at 5 bits, which achieves higher final test accuracy than linear at 5 bits.
1603.01025#0
1603.01025#2
1603.01025
[ "1510.03009" ]
1603.01025#2
Convolutional Neural Networks using Logarithmic Data Representation
# 1. Introduction

Deep convolutional neural networks (CNN) have demonstrated state-of-the-art performance in image classification (Krizhevsky et al., 2012; Simonyan & Zisserman, 2014; He et al., 2015) but have steadily grown in computational complexity. For example, the Deep Residual Learning network (He et al., 2015) set a new record in image classification accuracy at the expense of 11.3 billion floating-point multiply-and-add operations per forward pass of an image and 230 MB of memory to store the weights in its 152-layer network.

In order for these large networks to run in real-time applications such as on mobile or embedded platforms, it is often necessary to use low-precision arithmetic and apply compression techniques. Recently, many researchers have successfully deployed networks that compute using 8-bit fixed-point representation (Vanhoucke et al., 2011; Abadi et al., 2015) and have successfully trained networks with 16-bit fixed point (Gupta et al., 2015). This work in particular is built upon the idea that algorithm-level noise tolerance of the network can motivate simplifications in hardware complexity.

Interesting directions point towards matrix factorization (Denton et al., 2014) and tensorification (Novikov et al., 2015) by leveraging the structure of the fully-connected (FC) layers. Another promising area is to prune the FC layer before mapping this to sparse matrix-matrix routines in GPUs (Han et al., 2015b). However, many of these inventions aim at systems that meet some required and specific criteria, such as networks that have many, large FC layers or accelerators that handle efficient sparse matrix-matrix arithmetic. And with network architectures currently pushing towards increasing the depth of convolutional layers by settling for fewer dense FC layers (He et al., 2015; Szegedy et al., 2015), there are potential problems in motivating a one-size-fits-all solution to handle these computational and memory demands.

We propose a general method of representing and computing the dot products in a network that can allow networks with minimal constraint on the layer properties to run more efficiently in digital hardware.
1603.01025#1
1603.01025#3
1603.01025
[ "1510.03009" ]
1603.01025#3
Convolutional Neural Networks using Logarithmic Data Representation
However, many of these inventions aim at systems that meet some required and speciï¬ c crite- ria such as networks that have many, large FC layers or ac- celerators that handle efï¬ cient sparse matrix-matrix arith- metic. And with network architectures currently pushing towards increasing the depth of convolutional layers by set- tling for fewer dense FC layers (He et al., 2015; Szegedy et al., 2015), there are potential problems in motivating a one-size-ï¬ ts-all solution to handle these computational and memory demands. We propose a general method of representing and comput- Convolutional Neural Networks using Logarithmic Data Representation ing the dot products in a network that can allow networks with minimal constraint on the layer properties to run more efï¬ ciently in digital hardware. In this paper we explore the use of communicating activations, storing weights, and computing the atomic dot-products in the binary logarith- mic (base-2 logarithmic) domain for both inference and training. The motivations for moving to this domain are the following:
1603.01025#2
1603.01025#4
1603.01025
[ "1510.03009" ]
1603.01025#4
Convolutional Neural Networks using Logarithmic Data Representation
• Training networks with weight decay leads to final weights that are distributed non-uniformly around 0.
• Similarly, activations are also highly concentrated near 0. Our work uses rectified linear units (ReLU) as the non-linearity.
• Logarithmic representations can encode data with a very large dynamic range in fewer bits than can fixed-point representation (Gautschi et al., 2016).
• Data representation in the log domain is naturally encoded in digital hardware (as shown in Section 4.3).
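As a toy numerical illustration of these points (this is not the quantizer defined later in the paper), a value can be stored as a sign plus a rounded base-2 exponent, so that multiplying by it reduces to a bit shift; the exponent range chosen below is an arbitrary assumption.

```python
# Toy base-2 logarithmic encoding: sign plus a small integer exponent.
import numpy as np

def log2_quantize(x, exp_min=-6, exp_max=1):
    """Encode x as (sign, integer exponent), clipping the exponent to a small range."""
    sign = np.sign(x)
    exp = np.clip(np.round(np.log2(np.abs(x) + 1e-12)), exp_min, exp_max).astype(int)
    return sign, exp

def log2_dequantize(sign, exp):
    return sign * np.exp2(exp.astype(float))

w = np.array([0.31, -0.02, 0.004, -1.3])
sign, exp = log2_quantize(w)
print(exp)                          # [-2 -6 -6  0]
print(log2_dequantize(sign, exp))   # [ 0.25     -0.015625  0.015625 -1.      ]

# In hardware, x * 2**exp is just a left/right shift of x by |exp| bits instead of a multiply.
```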
1603.01025#3
1603.01025#5
1603.01025
[ "1510.03009" ]
1603.01025#5
Convolutional Neural Networks using Logarithmic Data Representation
encoded to as little as 5 bits without a significant accuracy penalty. There has also been recent work on training using low-precision arithmetic. Gupta et al. (2015) propose a stochastic rounding scheme to help train networks using 16-bit fixed-point. Lin et al. (2015) propose quantized back-propagation and ternary connect. This method reduces the number of floating-point multiplications by casting these operations into powers-of-two multiplies, which are easily realized with bit shifts in digital hardware. They apply this technique on MNIST and CIFAR10 with little loss in performance. However, their method does not completely eliminate all multiplications end-to-end: during test time the network uses the learned full-resolution weights for forward propagation. Training with reduced precision is motivated by the idea that high-precision gradient updates are unnecessary for the stochastic optimization of networks (Bottou & Bousquet, 2007; Bishop, 1995; Audhkhasi et al., 2013). In fact, there are some studies that show that gradient noise helps convergence. For example, Neelakantan et al. (2015) empirically find that gradient noise can also encourage faster exploration and annealing of the optimization space, which can help network generalization performance.
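For reference, here is a generic sketch of the stochastic rounding idea credited above to Gupta et al. (2015): a value is rounded to one of its two neighboring fixed-point grid points with probabilities proportional to proximity, so the rounding is unbiased in expectation. The fractional bit width and the use of NumPy are illustrative assumptions, not their implementation.

```python
# Unbiased stochastic rounding to a fixed-point grid with `frac_bits` fractional bits.
import numpy as np

def stochastic_round_fixed(x, frac_bits=8, seed=0):
    rng = np.random.default_rng(seed)
    scale = 2.0 ** frac_bits
    scaled = np.asarray(x) * scale
    floor = np.floor(scaled)
    prob_up = scaled - floor                     # distance to the lower grid point
    rounded = floor + (rng.random(np.shape(scaled)) < prob_up)
    return rounded / scale

x = np.full(10000, 0.30001)
print(stochastic_round_fixed(x).mean())          # close to 0.30001 on average
```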
1603.01025#4
1603.01025#6
1603.01025
[ "1510.03009" ]
1603.01025#6
Convolutional Neural Networks using Logarithmic Data Representation
Our contributions are listed:

• We show that networks obtain higher classification accuracies with logarithmic quantization than with linear quantization using traditional fixed-point at equivalent resolutions.
• We show that activations are more robust to quantization than weights. This is because the number of activations tends to be larger than the number of weights, which are reused during convolutions.
• We apply our logarithmic data representation to state-of-the-art networks, allowing activations and weights to use only 3 bits with almost no loss in classification performance.

Hardware implementations.
1603.01025#5
1603.01025#7
1603.01025
[ "1510.03009" ]