c-aero committed
Commit ed0960a · 1 Parent(s): 555a898

Delete train_example1.jsonl

Files changed (1)
  1. train_example1.jsonl +0 -4
train_example1.jsonl DELETED
@@ -1,4 +0,0 @@
- {"doi": "1102.0183", "chunk-id": "0", "chunk": "High-Performance Neural Networks\nfor Visual Object Classification\nDan C. Cire\u015fan, Ueli Meier, Jonathan Masci,\nLuca M. Gambardella and J\u00fcrgen Schmidhuber\nTechnical Report No. IDSIA-01-11\nJanuary 2011\nIDSIA / USI-SUPSI\nDalle Molle Institute for Artificial Intelligence\nGalleria 2, 6928 Manno, Switzerland\nIDSIA is a joint institute of both University of Lugano (USI) and University of Applied Sciences of Southern Switzerland (SUPSI),\nand was founded in 1988 by the Dalle Molle Foundation which promoted quality of life.\nThis work was partially supported by the Swiss Commission for Technology and Innovation (CTI), Project n. 9688.1 IFF:\nIntelligent Fill in Form.arXiv:1102.0183v1 [cs.AI] 1 Feb 2011\nTechnical Report No. IDSIA-01-11 1\nHigh-Performance Neural Networks\nfor Visual Object Classification\nDan C. Cire\u015fan, Ueli Meier, Jonathan Masci,\nLuca M. Gambardella and J\u00fcrgen Schmidhuber\nJanuary 2011\nAbstract\nWe present a fast, fully parameterizable GPU implementation of Convolutional Neural\nNetwork variants. Our feature extractors are neither carefully designed nor pre-wired, but", "id": "1102.0183", "title": "High-Performance Neural Networks for Visual Object Classification", "summary": "We present a fast, fully parameterizable GPU implementation of Convolutional\nNeural Network variants. Our feature extractors are neither carefully designed\nnor pre-wired, but rather learned in a supervised way. Our deep hierarchical\narchitectures achieve the best published results on benchmarks for object\nclassification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with\nerror rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple\nback-propagation perform better than more shallow ones. Learning is\nsurprisingly rapid. NORB is completely trained within five epochs. Test error\nrates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs,\nrespectively.", "source": "http://arxiv.org/pdf/1102.0183", "authors": ["Dan C. Cire\u015fan", "Ueli Meier", "Jonathan Masci", "Luca M. Gambardella", "J\u00fcrgen Schmidhuber"], "categories": ["cs.AI", "cs.NE"], "comment": "12 pages, 2 figures, 5 tables", "journal_ref": null, "primary_category": "cs.AI", "published": "20110201", "updated": "20110201", "references": []}
- {"doi": "1102.0183", "chunk-id": "1", "chunk": "January 2011\nAbstract\nWe present a fast, fully parameterizable GPU implementation of Convolutional Neural\nNetwork variants. Our feature extractors are neither carefully designed nor pre-wired, but\nrather learned in a supervised way. Our deep hierarchical architectures achieve the best\npublished results on benchmarks for object classification (NORB, CIFAR10) and handwritten\ndigit recognition (MNIST), with error rates of 2.53%, 19.51%, 0.35%, respectively. Deep\nnets trained by simple back-propagation perform better than more shallow ones. Learning\nis surprisingly rapid. NORB is completely trained within five epochs. Test error rates on\nMNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs, respectively.\n1 Introduction\nThe human visual system efficiently recognizes and localizes objects within cluttered scenes. For\nartificial systems, however, this is still difficult, due to viewpoint-dependent object variability,\nand the high in-class variability of many object types. Deep hierarchical neural models roughly\nmimick the nature of mammalian visual cortex, and by community consensus are among the most\npromising architectures for such tasks. The most successful hierarchical object recognition systems\nall extract localized features from input images, convolving image patches with filters. Filter", "id": "1102.0183", "title": "High-Performance Neural Networks for Visual Object Classification", "summary": "We present a fast, fully parameterizable GPU implementation of Convolutional\nNeural Network variants. Our feature extractors are neither carefully designed\nnor pre-wired, but rather learned in a supervised way. Our deep hierarchical\narchitectures achieve the best published results on benchmarks for object\nclassification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with\nerror rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple\nback-propagation perform better than more shallow ones. Learning is\nsurprisingly rapid. NORB is completely trained within five epochs. Test error\nrates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs,\nrespectively.", "source": "http://arxiv.org/pdf/1102.0183", "authors": ["Dan C. Cire\u015fan", "Ueli Meier", "Jonathan Masci", "Luca M. Gambardella", "J\u00fcrgen Schmidhuber"], "categories": ["cs.AI", "cs.NE"], "comment": "12 pages, 2 figures, 5 tables", "journal_ref": null, "primary_category": "cs.AI", "published": "20110201", "updated": "20110201", "references": []}
- {"doi": "1102.0183", "chunk-id": "2", "chunk": "promising architectures for such tasks. The most successful hierarchical object recognition systems\nall extract localized features from input images, convolving image patches with filters. Filter\nresponses are then repeatedly sub-sampled and re-filtered, resulting in a deep feed-forward network\narchitecture whose output feature vectors are eventually classified. One of the first hierarchical\nneural systems was the Neocognitron (Fukushima, 1980) which inspired many of the more recent\nvariants.\nUnsupervised learning methods applied to patches of natural images tend to produce localized\nfilters that resemble off-center-on-surround filters, orientation-sensitive bar detectors, Gabor filters\n(Schmidhuber et al. , 1996; Olshausen and Field, 1997; Hoyer and Hyv\u00e4rinen, 2000). These findings\nin conjunction with experimental studies of the visual cortex justify the use of such filters in the\nso-called standard model for object recognition (Riesenhuber and Poggio, 1999; Serre et al. , 2007;\nMutch and Lowe, 2008), whose filters are fixed, in contrast to those of Convolutional Neural", "id": "1102.0183", "title": "High-Performance Neural Networks for Visual Object Classification", "summary": "We present a fast, fully parameterizable GPU implementation of Convolutional\nNeural Network variants. Our feature extractors are neither carefully designed\nnor pre-wired, but rather learned in a supervised way. Our deep hierarchical\narchitectures achieve the best published results on benchmarks for object\nclassification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with\nerror rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple\nback-propagation perform better than more shallow ones. Learning is\nsurprisingly rapid. NORB is completely trained within five epochs. Test error\nrates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs,\nrespectively.", "source": "http://arxiv.org/pdf/1102.0183", "authors": ["Dan C. Cire\u015fan", "Ueli Meier", "Jonathan Masci", "Luca M. Gambardella", "J\u00fcrgen Schmidhuber"], "categories": ["cs.AI", "cs.NE"], "comment": "12 pages, 2 figures, 5 tables", "journal_ref": null, "primary_category": "cs.AI", "published": "20110201", "updated": "20110201", "references": []}
- {"doi": "1102.0183", "chunk-id": "3", "chunk": "Mutch and Lowe, 2008), whose filters are fixed, in contrast to those of Convolutional Neural\nNetworks (CNNs) (LeCun et al. , 1998; Behnke, 2003; Simard et al. , 2003), whose weights (filters)\nare randomly initialized and changed in a supervised way using back-propagation (BP).\nDespite the hardware progress of the past decades, computational speed is still a limiting\nfactor for CNN architectures characterized by many building blocks typically set by trial and\nerror. To systematically test the impact of various architectures on classification performance,\nwe present a fast CNN implementation on Graphics Processing Units (GPUs). Previous GPU\nimplementations of CNNs (Chellapilla et al. , 2006; Uetz and Behnke, 2009) were hard-coded to\nsatisfy GPU hardware constraints, whereas our implementation is flexible and fully online (i.e.,\nTechnical Report No. IDSIA-01-11 2\nweight updates after each image). It allows for training large CNNs within days instead of months,\nsuch that we can investigate the influence of various structural parameters by exploring large\nparameter spaces (Pinto et al. , 2009) and performing error analysis on repeated experiments.\nWe evaluate various networks on the handwritten digit benchmark MNIST (LeCun et al. , 1998)", "id": "1102.0183", "title": "High-Performance Neural Networks for Visual Object Classification", "summary": "We present a fast, fully parameterizable GPU implementation of Convolutional\nNeural Network variants. Our feature extractors are neither carefully designed\nnor pre-wired, but rather learned in a supervised way. Our deep hierarchical\narchitectures achieve the best published results on benchmarks for object\nclassification (NORB, CIFAR10) and handwritten digit recognition (MNIST), with\nerror rates of 2.53%, 19.51%, 0.35%, respectively. Deep nets trained by simple\nback-propagation perform better than more shallow ones. Learning is\nsurprisingly rapid. NORB is completely trained within five epochs. Test error\nrates on MNIST drop to 2.42%, 0.97% and 0.48% after 1, 3 and 17 epochs,\nrespectively.", "source": "http://arxiv.org/pdf/1102.0183", "authors": ["Dan C. Cire\u015fan", "Ueli Meier", "Jonathan Masci", "Luca M. Gambardella", "J\u00fcrgen Schmidhuber"], "categories": ["cs.AI", "cs.NE"], "comment": "12 pages, 2 figures, 5 tables", "journal_ref": null, "primary_category": "cs.AI", "published": "20110201", "updated": "20110201", "references": []}
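
The deleted file was JSON Lines: one JSON object per line, each with keys such as `doi`, `chunk-id`, `chunk`, `title`, `summary`, `source`, and `authors` (as seen in the diff above). A minimal sketch of reading such a file, assuming that same schema; the function name `load_chunks` is illustrative, not part of this repository:

```python
import json

def load_chunks(path):
    """Read a JSONL file: one JSON record per line, blank lines skipped."""
    records = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.strip()
            if line:
                records.append(json.loads(line))
    return records

# A single record in the schema shown in the diff; \u escapes decode on load.
example = ('{"doi": "1102.0183", "chunk-id": "0", "chunk": "...", '
           '"authors": ["Dan C. Cire\\u015fan", "J\\u00fcrgen Schmidhuber"]}')
record = json.loads(example)
print(record["chunk-id"])  # note: chunk IDs are strings, not integers
```

Note that `chunk-id` contains a hyphen, so records must be accessed with subscript syntax (`record["chunk-id"]`) rather than attribute access.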