Hande Celikkanat
committed on
Automated update through cc-citations github repo
- 2016.jsonl +4 -4
- 2017.jsonl +6 -6
- 2018.jsonl +7 -7
- 2019.jsonl +3 -3
- 2020.jsonl +10 -10
- 2021.jsonl +8 -8
- 2022.jsonl +7 -7
- 2023.jsonl +10 -10
- 2024.jsonl +0 -0
- 2025.jsonl +0 -0
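Each changed line in the diffs below is a single JSON object with "year", "title", "authors", "snippet" and "url" fields (some records omit "snippet" or "url"). As a reading aid only, and not part of this commit, the following is a minimal sketch of how the per-year files could be loaded; it assumes the *.jsonl files sit in the current working directory and that every non-empty line parses as one JSON object.

import json
from pathlib import Path

def load_citations(path):
    """Yield one citation record (a dict) per non-empty line of a JSONL file."""
    with open(path, encoding="utf-8") as fh:
        for line in fh:
            line = line.strip()
            if line:
                yield json.loads(line)

if __name__ == "__main__":
    # Count records per year file and show the fields of a sample entry.
    for path in sorted(Path(".").glob("20??.jsonl")):
        records = list(load_citations(path))
        print(f"{path.name}: {len(records)} records")
        if records:
            first = records[0]
            print("  fields:", ", ".join(first.keys()))
            print("  title: ", first.get("title"))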
2016.jsonl
CHANGED
|
@@ -1,7 +1,7 @@
|
|
| 1 |
-
{"year":"2016","title":"A Case Study of Complex Graph Analysis in Distributed Memory: Implementation and Optimization","authors":["GM Slota, S Rajamanickam, K Madduri"],"snippet":"... Focusing on one of the largest publicly-available hyperlink graphs (the 2012 Web Data Commons graph1, which was in- turn extracted from the open Common Crawl web corpus2), we develop parallel
|
| 2 |
{"year":"2016","title":"A Convolutional Encoder Model for Neural Machine Translation","authors":["J Gehring, M Auli, D Grangier, YN Dauphin - arXiv preprint arXiv:1611.02344, 2016"],"snippet":"... WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Common Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015. ...","url":["https://arxiv.org/pdf/1611.02344"]}
|
| 3 |
{"year":"2016","title":"A Deep Fusion Model for Domain Adaptation in Phrase-based MT","authors":["N Durrani, S Joty, A Abdelali, H Sajjad"],"snippet":"... test-13 993 18K 17K test-13 1169 26K 28K Table 1: Statistics of the English-German and Arabic-English training corpora in terms of Sentences and Tokens (represented in millions). ep = Europarl, cc = Common Crawl, un = United Nations ...","url":["https://www.aclweb.org/anthology/C/C16/C16-1299.pdf"]}
|
| 4 |
-
{"year":"2016","title":"A Large DataBase of Hypernymy Relations Extracted from the Web","authors":["J Seitner, C Bizer, K Eckert, S Faralli, R Meusel… - … of the 10th edition of the …, 2016"],"snippet":"...
|
| 5 |
{"year":"2016","title":"A Maturity Model for Public Administration as Open Translation Data Providers","authors":["N Bel, ML Forcada, A Gómez-Pérez - arXiv preprint arXiv:1607.01990, 2016"],"snippet":"... There are techniques to mitigate the need of large quantities of parallel text, but most often at the expense of resulting translation quality. As a reference of the magnitude we can take as a standard corpus the Common Crawl corpus (Smith et al. ...","url":["http://arxiv.org/pdf/1607.01990"]}
|
| 6 |
{"year":"2016","title":"A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference","authors":["B Paria, KM Annervaz, A Dukkipati, A Chatterjee… - arXiv preprint arXiv: …, 2016"],"snippet":"... We used batch normalization [Ioffe and Szegedy, 2015] while training. The various model parameters used are mentioned in Table I. We experimented with both GloVe vectors trained1 on Common Crawl dataset as well as Word2Vec vector trained2 on Google news dataset. ...","url":["https://arxiv.org/pdf/1611.04741"]}
|
| 7 |
{"year":"2016","title":"A practical guide to big data research in psychology.","authors":["EE Chen, SP Wojcik - Psychological Methods, 2016"],"snippet":"... as well as general collections, such as Amazon Web Services' Public Data Sets repository (AWS, nd, http://aws.amazon.com/public-data-sets/) which includes the 1000 Genomes Project, with full genomic sequences for 1,700 individuals, and the Common Crawl Corpus, with ...","url":["http://psycnet.apa.org/journals/met/21/4/458/"]}
|
|
@@ -106,7 +106,7 @@
|
|
| 106 |
{"year":"2016","title":"Lurking Malice in the Cloud: Understanding and Detecting Cloud Repository as a Malicious Service","authors":["X Liao, S Alrwais, K Yuan, L Xing, XF Wang, S Hao… - Proceedings of the 2016 …, 2016"],"snippet":"... Running the scanner over all the data collected by the Common Crawl [?], which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google Drive, etc.), we found around 1 million sites utilizing 6,885 ...","url":["http://dl.acm.org/citation.cfm?id=2978349"]}
|
| 107 |
{"year":"2016","title":"Machine Translation Quality and Post-Editor Productivity","authors":["M Sanchez-Torron, P Koehn - AMTA 2016, Vol., 2016"],"snippet":"... corresponding Spanish human reference translations. We trained nine MT systems with training data from the European Parliament proceedings, News Commentary, Common Crawl, and United Nations. The systems are phrase ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=22"]}
|
| 108 |
{"year":"2016","title":"Machine Translation Through Learning From a Communication Game","authors":["D He, Y Xia, T Qin, L Wang, N Yu, T Liu, WY Ma - Advances In Neural Information …, 2016"],"snippet":"... In detail, we used the same bilingual corpora from WMT'14 as used in [1, 5], which contains 12M sentence pairs extracting from five datasets: Europarl v7, Common Crawl corpus, UN corpus, News Commentary, and 109French-English corpus. ...","url":["http://papers.nips.cc/paper/6468-machine-translation-through-learning-from-a-communication-game.pdf"]}
|
| 109 |
-
{"year":"2016","title":"Measuring semantic similarity of words using concept networks","authors":["G Recski, E Iklódi, K Pajkossy, A Kornai"],"snippet":"... We extend this set of models with GloVe vectors4 (Pennington et al., 2014), trained on 840 billion tokens of Common Crawl data5, and
|
| 110 |
{"year":"2016","title":"Models and Inference for Prefix-Constrained Machine Translation","authors":["J Wuebker, S Green, J DeNero, S Hasan, MT Luong"],"snippet":"... The English-French bilingual training data consists of 4.9M sentence pairs from the Common Crawl and Europarl corpora from WMT 2015 (Bo- jar et al., 2015). The LM was estimated from the target side of the bitext. For English-German we run large-scale experiments. ...","url":["http://nlp.stanford.edu/pubs/wuebker2016acl_prefix.pdf"]}
|
| 111 |
{"year":"2016","title":"Multi-cultural Wikipedia mining of geopolitics interactions leveraging reduced Google matrix analysis","authors":["KM Frahm, SE Zant, K Jaffrès-Runser… - arXiv preprint arXiv: …, 2016"],"snippet":"... At present directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [13] or 3.5 billion web pages for a publicly ac- cessible web crawl that was gathered by the Common Crawl Foundation in 2012 [18]). ...","url":["https://arxiv.org/pdf/1612.07920"]}
|
| 112 |
{"year":"2016","title":"Multi-Perspective Context Matching for Machine Comprehension","authors":["Z Wang, H Mi, W Hamza, R Florian - arXiv preprint arXiv:1612.04211, 2016"],"snippet":"... jpurkar et al., 2016). To initialize the word embeddings in the word representation layer, we use the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington et al., 2014). For the out ...","url":["https://arxiv.org/pdf/1612.04211"]}
|
|
@@ -167,7 +167,7 @@
|
|
| 167 |
{"year":"2016","title":"The AFRL-MITLL WMT16 News-Translation Task Systems","authors":["J Gwinnup, T Anderson, G Erdmann, K Young, M Kazi… - Proceedings of the First …, 2016"],"snippet":"... to build a monolithic language model from the following sources: Yandex4, Commoncrawl (Smith et al., 2013), LDC Gigaword English v5 (Parker et al., 2011) and News Commentary. Submission system 1 included the data selected from the large Commoncrawl corpus as ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2313.pdf"]}
|
| 168 |
{"year":"2016","title":"The CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations","authors":["E Santus, A Gladkova, S Evert, A Lenci - COLING 2016, 2016"],"snippet":"... Team Method (s) Corpus size Corpus GHHH Word analogies, linear regression and multi-task CNN 100B 6B 840B Google News (pre-trained word2vec embeddings, 300 dim.); Wikipedia+ Gigaword 5 (pre-trained GloVe embeddings, 300 dim.), Common Crawl (pre-trained ...","url":["https://sites.google.com/site/cogalex2016/home/accepted-papers/CogALex-V_Proceedings.pdf#page=83"]}
|
| 169 |
{"year":"2016","title":"The Edinburgh/LMU Hierarchical Machine Translation System for WMT 2016","authors":["M Huck, A Fraser, B Haddow - Proc. of the ACL 2016 First Conf. on Machine …, 2016"],"snippet":"... CommonCrawl LM training data in background LM ... Utilizing a larger amount of target-side monolingual resources by appending the CommonCrawl corpus to the background LM's training data is very beneficial and increases the BLEU scores by around one point. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2315.pdf"]}
|
| 170 |
-
{"year":"2016","title":"The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16","authors":["F Stahlberg, E Hasler, B Byrne - arXiv preprint arXiv:1606.04963, 2016","FSEHB Byrne"],"snippet":"
|
| 171 |
{"year":"2016","title":"The ILSP/ARC submission to the WMT 2016 Bilingual Document Alignment Shared Task","authors":["V Papavassiliou, P Prokopidis, S Piperidis - Proceedings of the First Conference on …, 2016"],"snippet":"... 1http://commoncrawl.org/ 2http://nlp.ilsp.gr/redmine/ilsp-fc/ 3Including modules for metadata extraction, language identification, boilerplate removal, document clean-up, text classification and sentence alignment 733 ... Dirt cheap web-scale parallel text from the common crawl...","url":["http://www.aclweb.org/anthology/W/W16/W16-2375.pdf"]}
|
| 172 |
{"year":"2016","title":"The JHU Machine Translation Systems for WMT 2016","authors":["S Ding, K Duh, H Khayrallah, P Koehn, M Post - … of the First Conference on Machine …, 2016"],"snippet":"... In addition, we included a large language model based on the CommonCrawl monolingual data ... of the language model trained on the monomlingual corpora extracted from Common Crawl... year, large corpora of monolingual data were extracted from Common Crawl (Buck et ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2310.pdf"]}
|
| 173 |
{"year":"2016","title":"The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2016","authors":["TL Ha, E Cho, J Niehues, M Mediani, M Sperber… - Proceedings of the First …, 2016"],"snippet":"... To im- prove the quality of the Common Crawl corpus be- ing used in training, we filtered out noisy sentence pairs using an SVM classifier as described in (Me- diani et al., 2011). All of our translation systems are basically phrase-based. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2314.pdf"]}
|
|
|
|
| 1 |
+
{"year":"2016","title":"A Case Study of Complex Graph Analysis in Distributed Memory: Implementation and Optimization","authors":["GM Slota, S Rajamanickam, K Madduri"],"snippet":"... Focusing on one of the largest publicly-available hyperlink graphs (the 2012 Web Data Commons graph1, which was in- turn extracted from the open Common Crawl web corpus2), we develop parallel implementations for the Blue Waters supercomputer, one of the world's most ...","url":["http://www.personal.psu.edu/users/g/m/gms5016/pub/Dist-IPDPS16.pdf"]}
|
| 2 |
{"year":"2016","title":"A Convolutional Encoder Model for Neural Machine Translation","authors":["J Gehring, M Auli, D Grangier, YN Dauphin - arXiv preprint arXiv:1611.02344, 2016"],"snippet":"... WMT'15 English-German. We use all available parallel training data, namely Europarl v7, Common Crawl and News Commentary v10 and apply the standard Moses tokenization to obtain 3.9M sentence pairs (Koehn et al., 2007). We report results on newstest2015. ...","url":["https://arxiv.org/pdf/1611.02344"]}
|
| 3 |
{"year":"2016","title":"A Deep Fusion Model for Domain Adaptation in Phrase-based MT","authors":["N Durrani, S Joty, A Abdelali, H Sajjad"],"snippet":"... test-13 993 18K 17K test-13 1169 26K 28K Table 1: Statistics of the English-German and Arabic-English training corpora in terms of Sentences and Tokens (represented in millions). ep = Europarl, cc = Common Crawl, un = United Nations ...","url":["https://www.aclweb.org/anthology/C/C16/C16-1299.pdf"]}
|
| 4 |
+
{"year":"2016","title":"A Large DataBase of Hypernymy Relations Extracted from the Web","authors":["J Seitner, C Bizer, K Eckert, S Faralli, R Meusel… - … of the 10th edition of the …, 2016"],"snippet":"... The corpus is provided by the Common Crawl Foundation on AWS S3 as free download.6 The extraction of the tuples took around 2, 200 computing hours and was realized using 100 servers in parallel in less than 24 hours. ...","url":["http://webdatacommons.org/isadb/lrec2016.pdf"]}
|
| 5 |
{"year":"2016","title":"A Maturity Model for Public Administration as Open Translation Data Providers","authors":["N Bel, ML Forcada, A Gómez-Pérez - arXiv preprint arXiv:1607.01990, 2016"],"snippet":"... There are techniques to mitigate the need of large quantities of parallel text, but most often at the expense of resulting translation quality. As a reference of the magnitude we can take as a standard corpus the Common Crawl corpus (Smith et al. ...","url":["http://arxiv.org/pdf/1607.01990"]}
|
| 6 |
{"year":"2016","title":"A Neural Architecture Mimicking Humans End-to-End for Natural Language Inference","authors":["B Paria, KM Annervaz, A Dukkipati, A Chatterjee… - arXiv preprint arXiv: …, 2016"],"snippet":"... We used batch normalization [Ioffe and Szegedy, 2015] while training. The various model parameters used are mentioned in Table I. We experimented with both GloVe vectors trained1 on Common Crawl dataset as well as Word2Vec vector trained2 on Google news dataset. ...","url":["https://arxiv.org/pdf/1611.04741"]}
|
| 7 |
{"year":"2016","title":"A practical guide to big data research in psychology.","authors":["EE Chen, SP Wojcik - Psychological Methods, 2016"],"snippet":"... as well as general collections, such as Amazon Web Services' Public Data Sets repository (AWS, nd, http://aws.amazon.com/public-data-sets/) which includes the 1000 Genomes Project, with full genomic sequences for 1,700 individuals, and the Common Crawl Corpus, with ...","url":["http://psycnet.apa.org/journals/met/21/4/458/"]}
|
|
|
|
| 106 |
{"year":"2016","title":"Lurking Malice in the Cloud: Understanding and Detecting Cloud Repository as a Malicious Service","authors":["X Liao, S Alrwais, K Yuan, L Xing, XF Wang, S Hao… - Proceedings of the 2016 …, 2016"],"snippet":"... Running the scanner over all the data collected by the Common Crawl [?], which indexed five billion web pages, for those associated with all major cloud storage providers (including Amazon S3, Cloudfront, Google Drive, etc.), we found around 1 million sites utilizing 6,885 ...","url":["http://dl.acm.org/citation.cfm?id=2978349"]}
|
| 107 |
{"year":"2016","title":"Machine Translation Quality and Post-Editor Productivity","authors":["M Sanchez-Torron, P Koehn - AMTA 2016, Vol., 2016"],"snippet":"... corresponding Spanish human reference translations. We trained nine MT systems with training data from the European Parliament proceedings, News Commentary, Common Crawl, and United Nations. The systems are phrase ...","url":["https://www.researchgate.net/profile/John_Ortega3/publication/309765044_Fuzzy-match_repair_using_black-box_machine_translation_systems_what_can_be_expected/links/5822496f08ae7ea5be6af317.pdf#page=22"]}
|
| 108 |
{"year":"2016","title":"Machine Translation Through Learning From a Communication Game","authors":["D He, Y Xia, T Qin, L Wang, N Yu, T Liu, WY Ma - Advances In Neural Information …, 2016"],"snippet":"... In detail, we used the same bilingual corpora from WMT'14 as used in [1, 5], which contains 12M sentence pairs extracting from five datasets: Europarl v7, Common Crawl corpus, UN corpus, News Commentary, and 109French-English corpus. ...","url":["http://papers.nips.cc/paper/6468-machine-translation-through-learning-from-a-communication-game.pdf"]}
|
| 109 |
+
{"year":"2016","title":"Measuring semantic similarity of words using concept networks","authors":["G Recski, E Iklódi, K Pajkossy, A Kornai"],"snippet":"... We extend this set of models with GloVe vectors4 (Pennington et al., 2014), trained on 840 billion tokens of Common Crawl data5, and ... 2http://www.socher.org 3https://code.google.com/archive/ p/ word2vec/ 4http://nlp.stanford.edu/projects/ glove/ 5https://commoncrawl.org/ 6http ...","url":["http://www.kornai.com/Papers/wordsim.pdf"]}
|
| 110 |
{"year":"2016","title":"Models and Inference for Prefix-Constrained Machine Translation","authors":["J Wuebker, S Green, J DeNero, S Hasan, MT Luong"],"snippet":"... The English-French bilingual training data consists of 4.9M sentence pairs from the Common Crawl and Europarl corpora from WMT 2015 (Bo- jar et al., 2015). The LM was estimated from the target side of the bitext. For English-German we run large-scale experiments. ...","url":["http://nlp.stanford.edu/pubs/wuebker2016acl_prefix.pdf"]}
|
| 111 |
{"year":"2016","title":"Multi-cultural Wikipedia mining of geopolitics interactions leveraging reduced Google matrix analysis","authors":["KM Frahm, SE Zant, K Jaffrès-Runser… - arXiv preprint arXiv: …, 2016"],"snippet":"... At present directed networks of real systems can be very large (about 4.2 million articles for the English Wikipedia edition in 2013 [13] or 3.5 billion web pages for a publicly ac- cessible web crawl that was gathered by the Common Crawl Foundation in 2012 [18]). ...","url":["https://arxiv.org/pdf/1612.07920"]}
|
| 112 |
{"year":"2016","title":"Multi-Perspective Context Matching for Machine Comprehension","authors":["Z Wang, H Mi, W Hamza, R Florian - arXiv preprint arXiv:1612.04211, 2016"],"snippet":"... jpurkar et al., 2016). To initialize the word embeddings in the word representation layer, we use the 300-dimensional GloVe word vectors pre-trained from the 840B Common Crawl corpus (Pennington et al., 2014). For the out ...","url":["https://arxiv.org/pdf/1612.04211"]}
|
|
|
|
| 167 |
{"year":"2016","title":"The AFRL-MITLL WMT16 News-Translation Task Systems","authors":["J Gwinnup, T Anderson, G Erdmann, K Young, M Kazi… - Proceedings of the First …, 2016"],"snippet":"... to build a monolithic language model from the following sources: Yandex4, Commoncrawl (Smith et al., 2013), LDC Gigaword English v5 (Parker et al., 2011) and News Commentary. Submission system 1 included the data selected from the large Commoncrawl corpus as ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2313.pdf"]}
|
| 168 |
{"year":"2016","title":"The CogALex-V Shared Task on the Corpus-Based Identification of Semantic Relations","authors":["E Santus, A Gladkova, S Evert, A Lenci - COLING 2016, 2016"],"snippet":"... Team Method (s) Corpus size Corpus GHHH Word analogies, linear regression and multi-task CNN 100B 6B 840B Google News (pre-trained word2vec embeddings, 300 dim.); Wikipedia+ Gigaword 5 (pre-trained GloVe embeddings, 300 dim.), Common Crawl (pre-trained ...","url":["https://sites.google.com/site/cogalex2016/home/accepted-papers/CogALex-V_Proceedings.pdf#page=83"]}
|
| 169 |
{"year":"2016","title":"The Edinburgh/LMU Hierarchical Machine Translation System for WMT 2016","authors":["M Huck, A Fraser, B Haddow - Proc. of the ACL 2016 First Conf. on Machine …, 2016"],"snippet":"... CommonCrawl LM training data in background LM ... Utilizing a larger amount of target-side monolingual resources by appending the CommonCrawl corpus to the background LM's training data is very beneficial and increases the BLEU scores by around one point. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2315.pdf"]}
|
| 170 |
+
{"year":"2016","title":"The Edit Distance Transducer in Action: The University of Cambridge English-German System at WMT16","authors":["F Stahlberg, E Hasler, B Byrne - arXiv preprint arXiv:1606.04963, 2016","FSEHB Byrne"],"snippet":"... The parallel training data includes Europarl v7, Common Crawl, and News Commentary v10. Sentence pairs with sentences longer than 80 words or length ratios exceeding 2.4:1 were deleted, as were Common Crawl sentences from other languages (Shuyo, 2010). ...","url":["http://arxiv.org/pdf/1606.04963","https://ar5iv.labs.arxiv.org/html/1606.04963"]}
|
| 171 |
{"year":"2016","title":"The ILSP/ARC submission to the WMT 2016 Bilingual Document Alignment Shared Task","authors":["V Papavassiliou, P Prokopidis, S Piperidis - Proceedings of the First Conference on …, 2016"],"snippet":"... 1http://commoncrawl.org/ 2http://nlp.ilsp.gr/redmine/ilsp-fc/ 3Including modules for metadata extraction, language identification, boilerplate removal, document clean-up, text classification and sentence alignment 733 ... Dirt cheap web-scale parallel text from the common crawl...","url":["http://www.aclweb.org/anthology/W/W16/W16-2375.pdf"]}
|
| 172 |
{"year":"2016","title":"The JHU Machine Translation Systems for WMT 2016","authors":["S Ding, K Duh, H Khayrallah, P Koehn, M Post - … of the First Conference on Machine …, 2016"],"snippet":"... In addition, we included a large language model based on the CommonCrawl monolingual data ... of the language model trained on the monomlingual corpora extracted from Common Crawl... year, large corpora of monolingual data were extracted from Common Crawl (Buck et ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2310.pdf"]}
|
| 173 |
{"year":"2016","title":"The Karlsruhe Institute of Technology Systems for the News Translation Task in WMT 2016","authors":["TL Ha, E Cho, J Niehues, M Mediani, M Sperber… - Proceedings of the First …, 2016"],"snippet":"... To im- prove the quality of the Common Crawl corpus be- ing used in training, we filtered out noisy sentence pairs using an SVM classifier as described in (Me- diani et al., 2011). All of our translation systems are basically phrase-based. ...","url":["http://www.aclweb.org/anthology/W/W16/W16-2314.pdf"]}
|
2017.jsonl
CHANGED
|
@@ -19,7 +19,7 @@
|
|
| 19 |
{"year":"2017","title":"A Web Corpus for eCare: Collection, Annotation and Learning-Preliminary Results-DRAFT: 20 March 2017","authors":["M Santini, M Alirezai, M Nyström, A Jönsson"],"snippet":"... web, neither within the ”web as a corpus” experience, nor within the ”wacky” initiative, nor with Common Crawl corpus7. 5 See https://en.wikipedia.org/wiki/Fair_use 6 See https://www.jisc.ac. uk/guides/text-and-data-mining-copyright-exception 7 See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/315390867_A_Web_Corpus_for_eCare_Collection_Annotation_and_Learning_-_Preliminary_Results_-/links/58cfb829458515b6ed8c1527/A-Web-Corpus-for-eCare-Collection-Annotation-and-Learning-Preliminary-Results.pdf"]}
|
| 20 |
{"year":"2017","title":"A Web Corpus for eCare: Collection, Lay Annotation and Learning-First Results","authors":["M Santini, A Jönsson, M Nyström, M Alirezai"],"snippet":"... from the web, neither within the \"web as a corpus\" experience, nor within the \"wacky\" initiative, nor with Common Crawl corpus9. ... for human language technology: introducing an LRE special section\" Lang Resources & Evaluation 2017 51 9See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/318379265_A_Web_Corpus_for_eCare_Collection_Lay_Annotation_and_Learning_-First_Results-/links/596650de0f7e9b80917fea3e/A-Web-Corpus-for-eCare-Collection-Lay-Annotation-and-Learning-First-Results.pdf"]}
|
| 21 |
{"year":"2017","title":"A Web Page Distillation Strategy for Efficient Focused Crawling Based on Optimized Naïve Bayes (ONB) Classifier","authors":["AI Saleh, AE Abulwafa, MF Al Rahmawy - Applied Soft Computing, 2017"],"snippet":"The target of a focused crawler (FC) is to retrieve pages related to a specific domain of interest (DOI). However, FCs may be hasted if bad links were injected.","url":["http://www.sciencedirect.com/science/article/pii/S1568494616306536"]}
|
| 22 |
-
{"year":"2017","title":"Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks","authors":["M Hutchinson","W Foland, JH Martin - Proceedings of the 55th Annual Meeting of the …, 2017"],"snippet":"
|
| 23 |
{"year":"2017","title":"Accelerating Innovation Through Analogy Mining","authors":["T Hope, J Chan, A Kittur, D Shahaf - arXiv preprint arXiv:1706.05585, 2017"],"snippet":"... In more formal terms, let wi = (w1 i ,w2 i ,...,wT i) be the sequence of GloVe [27] word vectors (pre-trained on Common Crawl web data), representing (x1 i ,x2 i ,...,xT i ). We select all xi word vectors for which ˜p j ik = 1(˜m j ik = 1) for some k, and concatenate them into one ...","url":["https://arxiv.org/pdf/1706.05585"]}
|
| 24 |
{"year":"2017","title":"Accurate Sentence Matching with Hybrid Siamese Networks","authors":["M Nicosia, A Moschitti - Proceedings of the 2017 ACM on Conference on …, 2017"],"snippet":"… Their training split contains 384,348 pairs, and the balanced development and test sets contain 10,000 pairs each. The embeddings are a subset of the 300-dimensional GloVe word vectors pretrained on the Common Crawl corpus, 3 covering the Quora dataset vocabulary …","url":["http://dl.acm.org/citation.cfm?id=3133156"]}
|
| 25 |
{"year":"2017","title":"Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates","authors":["G Collell, L Van Gool, MF Moens - arXiv preprint arXiv:1711.06821, 2017"],"snippet":"… 4.5 Word embeddings We use 300-dimensional GloVe word embeddings (Pennington, Socher, and Manning 2014) pre-trained on the Common Crawl corpus (consisting of 840B-tokens), which we obtain from the authors' website.8 …","url":["https://arxiv.org/pdf/1711.06821"]}
|
|
@@ -74,7 +74,7 @@
|
|
| 74 |
{"year":"2017","title":"Common Crawled Web Corpora: Constructing corpora from large amounts of web data","authors":["KB Kristoffersen - 2017"],"snippet":"… Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens … 3 Exploring the Common Crawl 27 3.1 The data . . . . . 27 3.1.1 A note on scale …","url":["https://www.duo.uio.no/bitstream/handle/10852/57836/Kristoffersen_MSc2.pdf?sequence=5"]}
|
| 75 |
{"year":"2017","title":"Composition of Compound Nouns Using Distributional Semantics","authors":["K Yee, J Kalita"],"snippet":"... word2vec 300 3,000,000 100.00 bn Google News GloVe 300 400,000 42.00 bn Common Crawl HPCA 200 178,080 1.65 bn enWiki+Reuters +WSJ CW 50 130,000 0.85 bn enWiki+Reuters RCV1 word2vec 500 30,025 100 mn BNC word2vec 500 19,679 120 mn esWiki ...","url":["http://www.cs.uccs.edu/~jkalita/papers/2016/KyraYeeICON2016.pdf"]}
|
| 76 |
{"year":"2017","title":"Compressed Nonparametric Language Modelling","authors":["E Shareghi, G Haffari, T Cohn"],"snippet":"Page 1. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) 2701 Compressed Nonparametric Language Modelling Ehsan Shareghi,♣ Gholamreza Haffari,♣ Trevor Cohn♠ ♣ Faculty ...","url":["http://static.ijcai.org/proceedings-2017/0376.pdf"]}
|
| 77 |
-
{"year":"2017","title":"
|
| 78 |
{"year":"2017","title":"Compression with the tudocomp Framework","authors":["P Dinklage, J Fischer, D Köppl, M Löbel, K Sadakane - arXiv preprint arXiv: …, 2017"],"snippet":"Page 1. Compression with the tudocomp Framework Patrick Dinklage1, Johannes Fischer1, Dominik Köppl1, Marvin Löbel1, and Kunihiko Sadakane2 1 Department of Computer Science, TU Dortmund, Germany, pdinklag@gmail ...","url":["https://arxiv.org/pdf/1702.07577"]}
|
| 79 |
{"year":"2017","title":"Concept/Theme Roll-Up","authors":["T Sahay, R Tadishetti, A Mehta, S Jadon - 2017"],"snippet":"... representation. For word embeddings, we used GloVe trained on a common crawl corpus, containing 1900000 words in its vocabulary. ... phrases. For words, the weights were initialized with GloVe embeddings trained on the common-crawl corpus. ...","url":["https://people.cs.umass.edu/~tsahay/lexalytics_report.pdf"]}
|
| 80 |
{"year":"2017","title":"ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge","authors":["R Speer, J Lowry-Duda - arXiv preprint arXiv:1704.03560, 2017"],"snippet":"... The first source is the word2vec Google News embeddings2, and the second is the GloVe 1.2 embeddings that were trained on 840 billion tokens of the Common Crawl3. Because the input embeddings are only in En- glish, the vectors in other languages depended en- tirely on ...","url":["https://arxiv.org/pdf/1704.03560"]}
|
|
@@ -199,7 +199,7 @@
|
|
| 199 |
{"year":"2017","title":"Natural Language Question-Answering using Deep Learning","authors":["B Liu, F Lyu, R Roy"],"snippet":"... We experimented with both fixed 193 CommonCrawl.840B.300d pretrained word vectors and GLoVE.6B.100d pretrained word 194 vectors (Pennington, Socher, & Manning, 2015) 195 We enforce a fixed question length of 22 words, and fixed context length of 300 words. ...","url":["https://pdfs.semanticscholar.org/505a/ed7c751eb57bf5e59ab1cedc49448376b7d5.pdf"]}
|
| 200 |
{"year":"2017","title":"Neural Lie Detection with the CSC Deceptive Speech Dataset","authors":["S Desai, M Siegelman, Z Maurer"],"snippet":"... Each acoustic feature frame was 34 dimensional and each speaker-dependent frame was 68 dimensional. Lexical features were encoded using GloVe Wikipedia and CommonCrawl 100-dimensional embeddings[9] based on the transcripts provided with the dataset. ...","url":["http://web.stanford.edu/class/cs224s/reports/Shloka_Desai.pdf"]}
|
| 201 |
{"year":"2017","title":"Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search","authors":["L Dahlmann, E Matusov, P Petrushkov, S Khadivi - arXiv preprint arXiv:1708.03271, 2017"],"snippet":"... For development and test sets, two reference translations are used. The German→English system is trained on parallel corpora provided for the constrained WMT 2017 evaluation (Europarl, Common Crawl, and others). We ...","url":["https://arxiv.org/pdf/1708.03271"]}
|
| 202 |
-
{"year":"2017","title":"Neural Machine Translation Training in a Multi-Domain Scenario","authors":["H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel - arXiv preprint arXiv:1708.08712, 2017","HSNDF Dalvi, Y Belinkov, S Vogel"],"snippet":"... For German-English, we use the Europarl (EP), and the Common Crawl (CC) corpora made available for the 1st Conference on Statistical Machine
|
| 203 |
{"year":"2017","title":"Neural Machine Translation with LSTM's","authors":["J Dhaliwal"],"snippet":"... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 - 109 parallel corpus (81M) ... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 pc30109 parallel corpus (81M) ...","url":["https://people.umass.edu/~jdhaliwal/files/s2s.pdf"]}
|
| 204 |
{"year":"2017","title":"Neural Networks and Spelling Features for Native Language Identification","authors":["J Bjerva, G Grigonyte, R Ostling, B Plank - Bronze Sponsors, 2017"],"snippet":"... PoS tags are represented by 64-dimensional embeddings, initialised randomly; word tokens by 300-dimensional embeddings, initialised with GloVe (Pennington et al., 2014) em- beddings trained on 840 billion words of English web data from the Common Crawl project. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=255"]}
|
| 205 |
{"year":"2017","title":"Neural vs. Phrase-Based Machine Translation in a Multi-Domain Scenario","authors":["MA Farajian, M Turchi, M Negri, N Bertoldi, M Federico - EACL 2017, 2017"],"snippet":"... K PHP 38.4 K 259.0 K 9.7 K Ubuntu 9.0 K 47.7 K 8.6 K UN-TM 40.3 K 913.8 K 12.5 K CommonCrawl 2.6 M ... in particular NMT solutions, we used CommonCrawl and Europarl corpora as out-domain data in addition to the above-mentioned domain-specific corpora, resulting in ...","url":["http://www.aclweb.org/anthology/E/E17/E17-2.pdf#page=312"]}
|
|
@@ -209,7 +209,7 @@
|
|
| 209 |
{"year":"2017","title":"Novel Ranking-Based Lexical Similarity Measure for Word Embedding","authors":["J Dutkiewicz, C Jędrzejek - arXiv preprint arXiv:1712.08439, 2017"],"snippet":"… 4.1 Experimental setup We use the unmodified vector space model trained on 840 billion words from Common Crawl data with the GloVe algorithm introduced in Pennington et al. (2014). The model consists of 2.2 million unique vectors; Each vector consists of 300 components …","url":["https://arxiv.org/pdf/1712.08439"]}
|
| 210 |
{"year":"2017","title":"NRC Machine Translation System for WMT 2017","authors":["C Lo, S Larkin, B Chen, D Stewart, C Cherry, R Kuhn… - WMT 2017, 2017"],"snippet":"... 2 Russian-English news translation We used all the Russian-English parallel corpora available for the constrained news translation task. They include the CommonCrawl corpus, the NewsCommentary v12 corpus, the Yandex corpus and the Wikipedia headlines corpus. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=354"]}
|
| 211 |
{"year":"2017","title":"On the Effective Use of Pretraining for Natural Language Inference","authors":["I Cases, MT Luong, C Potts - arXiv preprint arXiv:1710.02076, 2017"],"snippet":"... a 1We used the publicly released embeddings, trained with Common Crawl 840B tokens for GloVe (http:// nlp.stanford.edu/projects/glove/) and Google News 42B for word2vec https://code.google.com/ archive/p/word2vec/. Although ...","url":["https://arxiv.org/pdf/1710.02076"]}
|
| 212 |
-
{"year":"2017","title":"Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:1707.06799, 2017","NRI Gurevych"],"
|
| 213 |
{"year":"2017","title":"Parallel Training Data Selection for Conversational Machine Translation","authors":["X Niu, M Carpuat"],"snippet":"... Corpus # Sentences # Words (en/fr) OpenSubtitles 33.5 M 284.0 M / 268.3 M MultiUN 13.2 M 367.1 M / 432.3 M Common Crawl 3.2 M 81.1 M / 91.3 M Europarl v7 2.0 M 55.7 M / 61.9 M Wikipedia 396 k 9.7 M / 8.7 M TED corpus 207 k 4.5 M / 4.8 M News Commentary v10 199 k ...","url":["https://pdfs.semanticscholar.org/fdf6/ae86229f51893dd6e33579511489af4a5eb7.pdf"]}
|
| 214 |
{"year":"2017","title":"Passfault: an Open Source Tool for Measuring Password Complexity and Strength","authors":["BA Rodrigues, JRB Paiva, VM Gomes, C Morris…"],"snippet":"... Wikipedia: The full text of Wikipedia in 2015. • Reddit: The corpus of Reddit comments through May 2015. • CCrawl: Text extracted from the Common Crawl and language-detected with cld2. Page 6. ACKNOWLEDGMENTS ...","url":["https://www.owasp.org/images/1/13/Artigo-Passfault.pdf"]}
|
| 215 |
{"year":"2017","title":"Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation","authors":["H Kim, JH Lee, SH Na - WMT 2017, 2017"],"snippet":"... allel corpora including the Europarl corpus, common crawl corpus, news commentary, rapid corpus of EU press releases for the WMT17 translation task3, and src-pe (source sentences-their target post-editions) pairs for the WMT17 QE task. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=586"]}
|
|
@@ -245,14 +245,14 @@
|
|
| 245 |
{"year":"2017","title":"Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation","authors":["D Cer, M Diab, E Agirre, I Lopez-Gazpio, L Specia - Proceedings of the 11th …, 2017"],"snippet":"Page 1. Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 1–14, Vancouver, Canada, August 3 - 4, 2017. cO2017 Association for Computational Linguistics SemEval-2017 Task ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval001.pdf"]}
|
| 246 |
{"year":"2017","title":"Sentence Embedding for Neural Machine Translation Domain Adaptation","authors":["R Wang, A Finch, M Utiyama, E Sumita"],"snippet":"... Out- of-domain corpora contained Common Crawl, Europarl v7, News Commentary v10 and United Nation (UN) EN-FR parallel corpora.4 • NIST 2006 Chinese (ZH) to English corpus 5 was used as the in-domain training corpus, following the settings of (Wang et al., 2014). ...","url":["https://www.aclweb.org/anthology/P/P17/P17-2089.pdf"]}
|
| 247 |
{"year":"2017","title":"SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets","authors":["N Tabari, A Seyeditabari, W Zadrozny"],"snippet":"... In two separate experiments, we used vectors based on the Common Crawl (840B tokens, 2.2M vo- cab, cased, 300 dimensions), and the pre-trained word vectors for Twitter (2B tweets, 27B tokens, 1.2M vocab, 200 dimensions). ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval146.pdf"]}
|
| 248 |
-
{"year":"2017","title":"Shallow reading with Deep Learning: Predicting popularity of online content using only its title","authors":["K Marasek, P law Rokita","W Stokowiec, T Trzcinski, K Wolk, K Marasek, P Rokita - arXiv preprint arXiv: …, 2017"],"snippet":"... As a text embedding in our experiments, we use publicly available GloVe word vectors [
|
| 249 |
{"year":"2017","title":"Simple Dynamic Coattention Networks","authors":["W Wu"],"snippet":"... unk〉. This affected the accuracy of predicted answers, as seen from Table 3. To reduced the number of unknown words, the Common Crawl GloVe vectors, which has a larger vocabulary, should be used instead. Document ...","url":["https://pdfs.semanticscholar.org/6a79/6c1c9c30913cb24d64939f90dcb06fa82be7.pdf"]}
|
| 250 |
{"year":"2017","title":"Six Challenges for Neural Machine Translation","authors":["P Koehn, R Knowles - arXiv preprint arXiv:1706.03872, 2017"],"snippet":"... BLEU scores of 34.5 on the WMT 2016 news test set (for the NMT model, this reflects the BLEU score re- sulting from translation with a beam size of 1). We use a single corpus for computing our lexical frequency counts (a concatenation of Common Crawl, Europarl, and News ...","url":["https://arxiv.org/pdf/1706.03872"]}
|
| 251 |
{"year":"2017","title":"Sockeye: A Toolkit for Neural Machine Translation","authors":["F Hieber, T Domhan, M Denkowski, D Vilar, A Sokolov… - arXiv preprint arXiv …, 2017"],"snippet":"… 9 Page 10. EN→DE LV→EN Dataset Sentences Tokens Types Sentences Tokens Types Europarl v7/v8 1,905,421 91,658,252 862,710 637,687 27,256,803 437,914 Common Crawl 2,394,616 97,473,856 3,655,645 - - - News Comm. v12 270,088 11,990,594 460,220 …","url":["https://arxiv.org/pdf/1712.05690"]}
|
| 252 |
{"year":"2017","title":"Specialising Word Vectors for Lexical Entailment","authors":["I Vulić, N Mrkšić - arXiv preprint arXiv:1710.06371, 2017"],"snippet":"... experiment with a variety of well-known, publicly available English word vectors: 1) Skip-Gram with Negative Sampling (SGNS) (Mikolov et al., 2013) trained on the Polyglot Wikipedia (Al-Rfou et al., 2013) by Levy and Goldberg (2014); 2) GLOVE Common Crawl (Pennington et ...","url":["https://arxiv.org/pdf/1710.06371"]}
|
| 253 |
{"year":"2017","title":"SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering","authors":["L Xu, W Dou, C Gao, J Wang, J Wei, H Zhong, T Huang"],"snippet":"Page 1. SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering Liang Xu1,2, Wensheng Dou1*, Chushu Gao1, Jie Wang1,2, Jun Wei1,2, Hua Zhong1, Tao Huang1 1State Key Laboratory of ...","url":["http://www.tcse.cn/~wsdou/papers/2017-msr-spreadcluster.pdf"]}
|
| 254 |
{"year":"2017","title":"SQuAD Question Answering using Multi-Perspective Matching","authors":["Z Maurer, S Desai, S Usmani"],"snippet":"... in some cases. In terms of future work to improve on our models, we can use 840B Common Crawl GloVe word vectors rather than the Glove word vectors pretrained on Wikipedia 2014 and Gigaword5. Given additional computational ...","url":["https://pdfs.semanticscholar.org/3b1a/a646bdc6daab268f6763b829686b00263333.pdf"]}
|
| 255 |
-
{"year":"2017","title":"Story Cloze Ending Selection Baselines and Data Examination","authors":["M Armstrong","T Mihaylov, A Frank - arXiv preprint arXiv:1703.04330, 2017"],"snippet":"
|
| 256 |
{"year":"2017","title":"Stronger Baselines for Trustable Results in Neural Machine Translation","authors":["M Denkowski, G Neubig - arXiv preprint arXiv:1706.09733, 2017"],"snippet":"... Scenario Size (sent) Sources WMT German-English 4,562,102 Europarl, Common Crawl, news commentary WMT English-Finnish 2,079,842 Europarl, Wikipedia titles WMT Romanian-English 612,422 Europarl, SETimes IWSLT English-French 220,400 TED talks IWSLT Czech ...","url":["https://arxiv.org/pdf/1706.09733"]}
|
| 257 |
{"year":"2017","title":"Structured Attention Networks","authors":["Y Kim, C Denton, L Hoang, AM Rush - arXiv preprint arXiv:1702.00887, 2017"],"snippet":"Page 1. Under review as a conference paper at ICLR 2017 STRUCTURED ATTENTION NETWORKS Yoon Kim∗ Carl Denton∗ Luong Hoang Alexander M. Rush {yoonkim@seas,carldenton@college,lhoang@g,srush@seas ...","url":["https://arxiv.org/pdf/1702.00887"]}
|
| 258 |
{"year":"2017","title":"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data","authors":["A Conneau, D Kiela, H Schwenk, L Barrault, A Bordes - arXiv preprint arXiv: …, 2017"],"snippet":"... 512 hidden units. We use opensource GloVe vectors trained on Common Crawl 840B2 with 300 dimensions as fixed word embeddings and initialize other word vectors to random values sampled from U(-0.1,0.1). Input sen ...","url":["https://arxiv.org/pdf/1705.02364"]}
|
|
|
|
| 19 |
{"year":"2017","title":"A Web Corpus for eCare: Collection, Annotation and Learning-Preliminary Results-DRAFT: 20 March 2017","authors":["M Santini, M Alirezai, M Nyström, A Jönsson"],"snippet":"... web, neither within the ”web as a corpus” experience, nor within the ”wacky” initiative, nor with Common Crawl corpus7. 5 See https://en.wikipedia.org/wiki/Fair_use 6 See https://www.jisc.ac. uk/guides/text-and-data-mining-copyright-exception 7 See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/315390867_A_Web_Corpus_for_eCare_Collection_Annotation_and_Learning_-_Preliminary_Results_-/links/58cfb829458515b6ed8c1527/A-Web-Corpus-for-eCare-Collection-Annotation-and-Learning-Preliminary-Results.pdf"]}
|
| 20 |
{"year":"2017","title":"A Web Corpus for eCare: Collection, Lay Annotation and Learning-First Results","authors":["M Santini, A Jönsson, M Nyström, M Alirezai"],"snippet":"... from the web, neither within the \"web as a corpus\" experience, nor within the \"wacky\" initiative, nor with Common Crawl corpus9. ... for human language technology: introducing an LRE special section\" Lang Resources & Evaluation 2017 51 9See http://commoncrawl.org/the ...","url":["https://www.researchgate.net/profile/Marina_Santini/publication/318379265_A_Web_Corpus_for_eCare_Collection_Lay_Annotation_and_Learning_-First_Results-/links/596650de0f7e9b80917fea3e/A-Web-Corpus-for-eCare-Collection-Lay-Annotation-and-Learning-First-Results.pdf"]}
|
| 21 |
{"year":"2017","title":"A Web Page Distillation Strategy for Efficient Focused Crawling Based on Optimized Naïve Bayes (ONB) Classifier","authors":["AI Saleh, AE Abulwafa, MF Al Rahmawy - Applied Soft Computing, 2017"],"snippet":"The target of a focused crawler (FC) is to retrieve pages related to a specific domain of interest (DOI). However, FCs may be hasted if bad links were injected.","url":["http://www.sciencedirect.com/science/article/pii/S1568494616306536"]}
|
| 22 |
+
{"year":"2017","title":"Abstract Meaning Representation Parsing using LSTM Recurrent Neural Networks","authors":["M Hutchinson","W Foland, JH Martin - Proceedings of the 55th Annual Meeting of the …, 2017"],"snippet":"… We start with 300 dimension GloVe representations (Pennington et al., 2014) trained on the 840 billion word common crawl (Smith et al., 2013). We added two binary dimensions: one for out of vocabulary words, and one for padding, resulting …","url":["http://www.aclweb.org/anthology/P17-1043","https://zdoc.pub/abstract-meaning-representation-parsing-using-lstm-recurrent.html"]}
|
| 23 |
{"year":"2017","title":"Accelerating Innovation Through Analogy Mining","authors":["T Hope, J Chan, A Kittur, D Shahaf - arXiv preprint arXiv:1706.05585, 2017"],"snippet":"... In more formal terms, let wi = (w1 i ,w2 i ,...,wT i) be the sequence of GloVe [27] word vectors (pre-trained on Common Crawl web data), representing (x1 i ,x2 i ,...,xT i ). We select all xi word vectors for which ˜p j ik = 1(˜m j ik = 1) for some k, and concatenate them into one ...","url":["https://arxiv.org/pdf/1706.05585"]}
|
| 24 |
{"year":"2017","title":"Accurate Sentence Matching with Hybrid Siamese Networks","authors":["M Nicosia, A Moschitti - Proceedings of the 2017 ACM on Conference on …, 2017"],"snippet":"… Their training split contains 384,348 pairs, and the balanced development and test sets contain 10,000 pairs each. The embeddings are a subset of the 300-dimensional GloVe word vectors pretrained on the Common Crawl corpus, 3 covering the Quora dataset vocabulary …","url":["http://dl.acm.org/citation.cfm?id=3133156"]}
|
| 25 |
{"year":"2017","title":"Acquiring Common Sense Spatial Knowledge through Implicit Spatial Templates","authors":["G Collell, L Van Gool, MF Moens - arXiv preprint arXiv:1711.06821, 2017"],"snippet":"… 4.5 Word embeddings We use 300-dimensional GloVe word embeddings (Pennington, Socher, and Manning 2014) pre-trained on the Common Crawl corpus (consisting of 840B-tokens), which we obtain from the authors' website.8 …","url":["https://arxiv.org/pdf/1711.06821"]}
|
|
|
|
| 74 |
{"year":"2017","title":"Common Crawled Web Corpora: Constructing corpora from large amounts of web data","authors":["KB Kristoffersen - 2017"],"snippet":"… Additionally, by using data provided by the Common Crawl Foundation, I develop a new very large English corpus with more than 135 billion tokens … 3 Exploring the Common Crawl 27 3.1 The data . . . . . 27 3.1.1 A note on scale …","url":["https://www.duo.uio.no/bitstream/handle/10852/57836/Kristoffersen_MSc2.pdf?sequence=5"]}
|
| 75 |
{"year":"2017","title":"Composition of Compound Nouns Using Distributional Semantics","authors":["K Yee, J Kalita"],"snippet":"... word2vec 300 3,000,000 100.00 bn Google News GloVe 300 400,000 42.00 bn Common Crawl HPCA 200 178,080 1.65 bn enWiki+Reuters +WSJ CW 50 130,000 0.85 bn enWiki+Reuters RCV1 word2vec 500 30,025 100 mn BNC word2vec 500 19,679 120 mn esWiki ...","url":["http://www.cs.uccs.edu/~jkalita/papers/2016/KyraYeeICON2016.pdf"]}
|
| 76 |
{"year":"2017","title":"Compressed Nonparametric Language Modelling","authors":["E Shareghi, G Haffari, T Cohn"],"snippet":"Page 1. Proceedings of the Twenty-Sixth International Joint Conference on Artificial Intelligence (IJCAI-17) 2701 Compressed Nonparametric Language Modelling Ehsan Shareghi,♣ Gholamreza Haffari,♣ Trevor Cohn♠ ♣ Faculty ...","url":["http://static.ijcai.org/proceedings-2017/0376.pdf"]}
|
| 77 |
+
{"year":"2017","title":"Compressing Word Embeddings via Deep Compositional Code Learning","authors":["R Shu, H Nakayama - arXiv preprint arXiv:1711.01068, 2017","RSH Nakayama"],"snippet":"... purpose. We lowercase and tokenize all texts with the nltk package. We choose the 300-dimensional uncased Glove word vectors (trained on 42B tokens of Common Crawl data) as our baseline embeddings. The vocabulary ...","url":["https://arxiv.org/pdf/1711.01068","https://pdfs.semanticscholar.org/1713/d05f9d5861cac4d5ec73151667cb03a42bfc.pdf"]}
|
| 78 |
{"year":"2017","title":"Compression with the tudocomp Framework","authors":["P Dinklage, J Fischer, D Köppl, M Löbel, K Sadakane - arXiv preprint arXiv: …, 2017"],"snippet":"Page 1. Compression with the tudocomp Framework Patrick Dinklage1, Johannes Fischer1, Dominik Köppl1, Marvin Löbel1, and Kunihiko Sadakane2 1 Department of Computer Science, TU Dortmund, Germany, pdinklag@gmail ...","url":["https://arxiv.org/pdf/1702.07577"]}
|
| 79 |
{"year":"2017","title":"Concept/Theme Roll-Up","authors":["T Sahay, R Tadishetti, A Mehta, S Jadon - 2017"],"snippet":"... representation. For word embeddings, we used GloVe trained on a common crawl corpus, containing 1900000 words in its vocabulary. ... phrases. For words, the weights were initialized with GloVe embeddings trained on the common-crawl corpus. ...","url":["https://people.cs.umass.edu/~tsahay/lexalytics_report.pdf"]}
|
| 80 |
{"year":"2017","title":"ConceptNet at SemEval-2017 Task 2: Extending Word Embeddings with Multilingual Relational Knowledge","authors":["R Speer, J Lowry-Duda - arXiv preprint arXiv:1704.03560, 2017"],"snippet":"... The first source is the word2vec Google News embeddings2, and the second is the GloVe 1.2 embeddings that were trained on 840 billion tokens of the Common Crawl3. Because the input embeddings are only in En- glish, the vectors in other languages depended en- tirely on ...","url":["https://arxiv.org/pdf/1704.03560"]}
|
|
|
|
| 199 |
{"year":"2017","title":"Natural Language Question-Answering using Deep Learning","authors":["B Liu, F Lyu, R Roy"],"snippet":"... We experimented with both fixed 193 CommonCrawl.840B.300d pretrained word vectors and GLoVE.6B.100d pretrained word 194 vectors (Pennington, Socher, & Manning, 2015) 195 We enforce a fixed question length of 22 words, and fixed context length of 300 words. ...","url":["https://pdfs.semanticscholar.org/505a/ed7c751eb57bf5e59ab1cedc49448376b7d5.pdf"]}
|
| 200 |
{"year":"2017","title":"Neural Lie Detection with the CSC Deceptive Speech Dataset","authors":["S Desai, M Siegelman, Z Maurer"],"snippet":"... Each acoustic feature frame was 34 dimensional and each speaker-dependent frame was 68 dimensional. Lexical features were encoded using GloVe Wikipedia and CommonCrawl 100-dimensional embeddings[9] based on the transcripts provided with the dataset. ...","url":["http://web.stanford.edu/class/cs224s/reports/Shloka_Desai.pdf"]}
|
| 201 |
{"year":"2017","title":"Neural Machine Translation Leveraging Phrase-based Models in a Hybrid Search","authors":["L Dahlmann, E Matusov, P Petrushkov, S Khadivi - arXiv preprint arXiv:1708.03271, 2017"],"snippet":"... For development and test sets, two reference translations are used. The German→English system is trained on parallel corpora provided for the constrained WMT 2017 evaluation (Europarl, Common Crawl, and others). We ...","url":["https://arxiv.org/pdf/1708.03271"]}
|
| 202 |
+
{"year":"2017","title":"Neural Machine Translation Training in a Multi-Domain Scenario","authors":["H Sajjad, N Durrani, F Dalvi, Y Belinkov, S Vogel - arXiv preprint arXiv:1708.08712, 2017","HSNDF Dalvi, Y Belinkov, S Vogel"],"snippet":"... For German-English, we use the Europarl (EP), and the Common Crawl (CC) corpora made available for the 1st Conference on Statistical Machine Translation3 as out- of-domain corpus. ... EP = Europarl, CC = Common Crawl, UN = United Nations. ...","url":["https://arxiv.org/pdf/1708.08712","https://www.researchgate.net/profile/Nadir_Durrani/publication/319349687_Neural_Machine_Translation_Training_in_a_Multi-Domain_Scenario/links/59d0f2a3aca2721f43673f75/Neural-Machine-Translation-Training-in-a-Multi-Domain-Scenario.pdf"]}
|
| 203 |
{"year":"2017","title":"Neural Machine Translation with LSTM's","authors":["J Dhaliwal"],"snippet":"... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 - 109 parallel corpus (81M) ... 3. dev08 11 - old dev dat from 2008 to 2011 (0.3M) 4. crawl - data from common crawl (90M) 5. ccb2 pc30109 parallel corpus (81M) ...","url":["https://people.umass.edu/~jdhaliwal/files/s2s.pdf"]}
|
| 204 |
{"year":"2017","title":"Neural Networks and Spelling Features for Native Language Identification","authors":["J Bjerva, G Grigonyte, R Ostling, B Plank - Bronze Sponsors, 2017"],"snippet":"... PoS tags are represented by 64-dimensional embeddings, initialised randomly; word tokens by 300-dimensional embeddings, initialised with GloVe (Pennington et al., 2014) em- beddings trained on 840 billion words of English web data from the Common Crawl project. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-50.pdf#page=255"]}
|
| 205 |
{"year":"2017","title":"Neural vs. Phrase-Based Machine Translation in a Multi-Domain Scenario","authors":["MA Farajian, M Turchi, M Negri, N Bertoldi, M Federico - EACL 2017, 2017"],"snippet":"... K PHP 38.4 K 259.0 K 9.7 K Ubuntu 9.0 K 47.7 K 8.6 K UN-TM 40.3 K 913.8 K 12.5 K CommonCrawl 2.6 M ... in particular NMT solutions, we used CommonCrawl and Europarl corpora as out-domain data in addition to the above-mentioned domain-specific corpora, resulting in ...","url":["http://www.aclweb.org/anthology/E/E17/E17-2.pdf#page=312"]}
|
|
|
|
| 209 |
{"year":"2017","title":"Novel Ranking-Based Lexical Similarity Measure for Word Embedding","authors":["J Dutkiewicz, C Jędrzejek - arXiv preprint arXiv:1712.08439, 2017"],"snippet":"… 4.1 Experimental setup We use the unmodified vector space model trained on 840 billion words from Common Crawl data with the GloVe algorithm introduced in Pennington et al. (2014). The model consists of 2.2 million unique vectors; Each vector consists of 300 components …","url":["https://arxiv.org/pdf/1712.08439"]}
|
| 210 |
{"year":"2017","title":"NRC Machine Translation System for WMT 2017","authors":["C Lo, S Larkin, B Chen, D Stewart, C Cherry, R Kuhn… - WMT 2017, 2017"],"snippet":"... 2 Russian-English news translation We used all the Russian-English parallel corpora available for the constrained news translation task. They include the CommonCrawl corpus, the NewsCommentary v12 corpus, the Yandex corpus and the Wikipedia headlines corpus. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=354"]}
|
| 211 |
{"year":"2017","title":"On the Effective Use of Pretraining for Natural Language Inference","authors":["I Cases, MT Luong, C Potts - arXiv preprint arXiv:1710.02076, 2017"],"snippet":"... a 1We used the publicly released embeddings, trained with Common Crawl 840B tokens for GloVe (http:// nlp.stanford.edu/projects/glove/) and Google News 42B for word2vec https://code.google.com/ archive/p/word2vec/. Although ...","url":["https://arxiv.org/pdf/1710.02076"]}
|
| 212 |
+
{"year":"2017","title":"Optimal Hyperparameters for Deep LSTM-Networks for Sequence Labeling Tasks","authors":["N Reimers, I Gurevych - arXiv preprint arXiv:1707.06799, 2017","NRI Gurevych"],"url":["https://arxiv.org/pdf/1707.06799","https://www.arxiv-vanity.com/papers/1707.06799v2/"]}
|
| 213 |
{"year":"2017","title":"Parallel Training Data Selection for Conversational Machine Translation","authors":["X Niu, M Carpuat"],"snippet":"... Corpus # Sentences # Words (en/fr) OpenSubtitles 33.5 M 284.0 M / 268.3 M MultiUN 13.2 M 367.1 M / 432.3 M Common Crawl 3.2 M 81.1 M / 91.3 M Europarl v7 2.0 M 55.7 M / 61.9 M Wikipedia 396 k 9.7 M / 8.7 M TED corpus 207 k 4.5 M / 4.8 M News Commentary v10 199 k ...","url":["https://pdfs.semanticscholar.org/fdf6/ae86229f51893dd6e33579511489af4a5eb7.pdf"]}
|
| 214 |
{"year":"2017","title":"Passfault: an Open Source Tool for Measuring Password Complexity and Strength","authors":["BA Rodrigues, JRB Paiva, VM Gomes, C Morris…"],"snippet":"... Wikipedia: The full text of Wikipedia in 2015. • Reddit: The corpus of Reddit comments through May 2015. • CCrawl: Text extracted from the Common Crawl and language-detected with cld2. Page 6. ACKNOWLEDGMENTS ...","url":["https://www.owasp.org/images/1/13/Artigo-Passfault.pdf"]}
|
| 215 |
{"year":"2017","title":"Predictor-Estimator using Multilevel Task Learning with Stack Propagation for Neural Quality Estimation","authors":["H Kim, JH Lee, SH Na - WMT 2017, 2017"],"snippet":"... allel corpora including the Europarl corpus, common crawl corpus, news commentary, rapid corpus of EU press releases for the WMT17 translation task3, and src-pe (source sentences-their target post-editions) pairs for the WMT17 QE task. ...","url":["http://www.aclweb.org/anthology/W/W17/W17-47.pdf#page=586"]}
|
|
|
|
| 245 |
{"year":"2017","title":"Semeval-2017 task 1: Semantic textual similarity multilingual and crosslingual focused evaluation","authors":["D Cer, M Diab, E Agirre, I Lopez-Gazpio, L Specia - Proceedings of the 11th …, 2017"],"snippet":"Page 1. Proceedings of the 11th International Workshop on Semantic Evaluations (SemEval-2017), pages 1–14, Vancouver, Canada, August 3 - 4, 2017. cO2017 Association for Computational Linguistics SemEval-2017 Task ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval001.pdf"]}
|
| 246 |
{"year":"2017","title":"Sentence Embedding for Neural Machine Translation Domain Adaptation","authors":["R Wang, A Finch, M Utiyama, E Sumita"],"snippet":"... Out- of-domain corpora contained Common Crawl, Europarl v7, News Commentary v10 and United Nation (UN) EN-FR parallel corpora.4 • NIST 2006 Chinese (ZH) to English corpus 5 was used as the in-domain training corpus, following the settings of (Wang et al., 2014). ...","url":["https://www.aclweb.org/anthology/P/P17/P17-2089.pdf"]}
{"year":"2017","title":"SentiHeros at SemEval-2017 Task 5: An application of Sentiment Analysis on Financial Tweets","authors":["N Tabari, A Seyeditabari, W Zadrozny"],"snippet":"... In two separate experiments, we used vectors based on the Common Crawl (840B tokens, 2.2M vo- cab, cased, 300 dimensions), and the pre-trained word vectors for Twitter (2B tweets, 27B tokens, 1.2M vocab, 200 dimensions). ...","url":["http://nlp.arizona.edu/SemEval-2017/pdf/SemEval146.pdf"]}
+
{"year":"2017","title":"Shallow reading with Deep Learning: Predicting popularity of online content using only its title","authors":["K Marasek, P law Rokita","W Stokowiec, T Trzcinski, K Wolk, K Marasek, P Rokita - arXiv preprint arXiv: …, 2017"],"snippet":"... As a text embedding in our experiments, we use publicly available GloVe word vectors [Pennington et al., 2014] pre-trained on two datasets: Wikipedia 2014 with Gigaword5 (W+G5) and Common Crawl (CC)6. Since their output dimensionality can be modified, we show the ...","url":["http://ii.pw.edu.pl/~ttrzcins/papers/ISMIS_2017_paper_57.pdf","https://arxiv.org/pdf/1707.06806"]}
{"year":"2017","title":"Simple Dynamic Coattention Networks","authors":["W Wu"],"snippet":"... unk〉. This affected the accuracy of predicted answers, as seen from Table 3. To reduced the number of unknown words, the Common Crawl GloVe vectors, which has a larger vocabulary, should be used instead. Document ...","url":["https://pdfs.semanticscholar.org/6a79/6c1c9c30913cb24d64939f90dcb06fa82be7.pdf"]}
{"year":"2017","title":"Six Challenges for Neural Machine Translation","authors":["P Koehn, R Knowles - arXiv preprint arXiv:1706.03872, 2017"],"snippet":"... BLEU scores of 34.5 on the WMT 2016 news test set (for the NMT model, this reflects the BLEU score re- sulting from translation with a beam size of 1). We use a single corpus for computing our lexical frequency counts (a concatenation of Common Crawl, Europarl, and News ...","url":["https://arxiv.org/pdf/1706.03872"]}
{"year":"2017","title":"Sockeye: A Toolkit for Neural Machine Translation","authors":["F Hieber, T Domhan, M Denkowski, D Vilar, A Sokolov… - arXiv preprint arXiv …, 2017"],"snippet":"… 9 Page 10. EN→DE LV→EN Dataset Sentences Tokens Types Sentences Tokens Types Europarl v7/v8 1,905,421 91,658,252 862,710 637,687 27,256,803 437,914 Common Crawl 2,394,616 97,473,856 3,655,645 - - - News Comm. v12 270,088 11,990,594 460,220 …","url":["https://arxiv.org/pdf/1712.05690"]}
{"year":"2017","title":"Specialising Word Vectors for Lexical Entailment","authors":["I Vulić, N Mrkšić - arXiv preprint arXiv:1710.06371, 2017"],"snippet":"... experiment with a variety of well-known, publicly available English word vectors: 1) Skip-Gram with Negative Sampling (SGNS) (Mikolov et al., 2013) trained on the Polyglot Wikipedia (Al-Rfou et al., 2013) by Levy and Goldberg (2014); 2) GLOVE Common Crawl (Pennington et ...","url":["https://arxiv.org/pdf/1710.06371"]}
{"year":"2017","title":"SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering","authors":["L Xu, W Dou, C Gao, J Wang, J Wei, H Zhong, T Huang"],"snippet":"Page 1. SpreadCluster: Recovering Versioned Spreadsheets through Similarity-Based Clustering Liang Xu1,2, Wensheng Dou1*, Chushu Gao1, Jie Wang1,2, Jun Wei1,2, Hua Zhong1, Tao Huang1 1State Key Laboratory of ...","url":["http://www.tcse.cn/~wsdou/papers/2017-msr-spreadcluster.pdf"]}
{"year":"2017","title":"SQuAD Question Answering using Multi-Perspective Matching","authors":["Z Maurer, S Desai, S Usmani"],"snippet":"... in some cases. In terms of future work to improve on our models, we can use 840B Common Crawl GloVe word vectors rather than the Glove word vectors pretrained on Wikipedia 2014 and Gigaword5. Given additional computational ...","url":["https://pdfs.semanticscholar.org/3b1a/a646bdc6daab268f6763b829686b00263333.pdf"]}
+
{"year":"2017","title":"Story Cloze Ending Selection Baselines and Data Examination","authors":["M Armstrong","T Mihaylov, A Frank - arXiv preprint arXiv:1703.04330, 2017"],"snippet":"... models. Using All features defined in Section 3.1, the word2vec vectors, trained on Google News 100B corpus perform best followed by ConcepNet enriched em- beddings and Glove trained on Common Crawl 840B. The ...","url":["https://arxiv.org/pdf/1703.04330","https://zdoc.pub/story-cloze-ending-selection-baselines-and-data-examination.html"]}
{"year":"2017","title":"Stronger Baselines for Trustable Results in Neural Machine Translation","authors":["M Denkowski, G Neubig - arXiv preprint arXiv:1706.09733, 2017"],"snippet":"... Scenario Size (sent) Sources WMT German-English 4,562,102 Europarl, Common Crawl, news commentary WMT English-Finnish 2,079,842 Europarl, Wikipedia titles WMT Romanian-English 612,422 Europarl, SETimes IWSLT English-French 220,400 TED talks IWSLT Czech ...","url":["https://arxiv.org/pdf/1706.09733"]}
{"year":"2017","title":"Structured Attention Networks","authors":["Y Kim, C Denton, L Hoang, AM Rush - arXiv preprint arXiv:1702.00887, 2017"],"snippet":"Page 1. Under review as a conference paper at ICLR 2017 STRUCTURED ATTENTION NETWORKS Yoon Kim∗ Carl Denton∗ Luong Hoang Alexander M. Rush {yoonkim@seas,carldenton@college,lhoang@g,srush@seas ...","url":["https://arxiv.org/pdf/1702.00887"]}
{"year":"2017","title":"Supervised Learning of Universal Sentence Representations from Natural Language Inference Data","authors":["A Conneau, D Kiela, H Schwenk, L Barrault, A Bordes - arXiv preprint arXiv: …, 2017"],"snippet":"... 512 hidden units. We use opensource GloVe vectors trained on Common Crawl 840B2 with 300 dimensions as fixed word embeddings and initialize other word vectors to random values sampled from U(-0.1,0.1). Input sen ...","url":["https://arxiv.org/pdf/1705.02364"]}
2018.jsonl
CHANGED
@@ -90,7 +90,7 @@
{"year":"2018","title":"AUTOMATIC PATTERN RECOGNITION IN CONVERSATIONS","authors":["R Raanani, R Levy, D Facher, MY Breakstone - US Patent App. 15/817,490, 2018"],"snippet":"… language processing (NLP) approaches to both topic modeling and world-knowledge modeling, have become much more efficient due to the i Insidesales.com “Market size 2013” study availability of large, freely …","url":["http://www.freepatentsonline.com/y2018/0077286.html"]}
{"year":"2018","title":"Automatic Post-Editing of Machine Translation: A Neural Programmer-Interpreter Approach","authors":["TT Vu, G Haffari - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… Interestingly, training MT+AG and MT+AG+LM models on 23K data lead to better TER/BLEU than those trained on 500K+12K. This implies the importance of in-domain training data, as the synthetic corpus is created …","url":["http://www.aclweb.org/anthology/D18-1341"]}
{"year":"2018","title":"Automatic Question Tagging with Deep Neural Networks","authors":["B Sun, Y Zhu, Y Xiao, R Xiao, YG Wei - IEEE Transactions on Learning Technologies, 2018"],"snippet":"… Word2vec [41] trained on a large corpus. Some pre-trained word vectors are available, such as GloVe Common Crawl vectors1 and word2vec vectors2, which is trained on Google News. The word vectors can be divided into …","url":["http://ieeexplore.ieee.org/abstract/document/8295250/"]}
-
{"year":"2018","title":"Automatically Categorizing Software Technologies","authors":["M Nassif, C Treude, M Robillard - IEEE Transactions on Software Engineering, 2018","S Khan, WH Butt - 2022 2nd International Conference on Digital Futures …, 2022"],"snippet":"…
{"year":"2018","title":"Based Speech Recognition with Gated ConvNets","authors":["V Liptchinsky, G Synnaeve, R Collobert - arXiv preprint arXiv:1712.09444, 2017"],"snippet":"… Extra Resources Panayotov et al. (2015) HMM+DNN+pNorm phone fMLLR phone lexicon Amodei et al. (2016) 2D-CNN+RNN letter none 11.9Kh train set, Common Crawl LM Peddinti et al. (2015b) HMM+CNN phone iVectors phone lexicon Povey et al …","url":["https://arxiv.org/pdf/1712.09444"]}
{"year":"2018","title":"Belittling the Source: Trustworthiness Indicators to Obfuscate Fake News on the Web","authors":["D Esteves, AJ Reddy, P Chawla, J Lehmann - arXiv preprint arXiv:1809.00494, 2018"],"snippet":"… Social Tags: returns the frequency of social tags in wb: R ⋃ i=1 ϕ(i, wb) 11. OpenSources: returns the open-source classification (x) for a given website: x = { 1, if w ∈ O 0, if w ∈ O 12. PageRankCC: PageRank information …","url":["https://arxiv.org/pdf/1809.00494"]}
{"year":"2018","title":"Bi-Directional Differentiable Input Reconstruction for Low-Resource Neural Machine Translation","authors":["X Niu, W Xu, M Carpuat - arXiv preprint arXiv:1811.01116, 2018"],"snippet":"… data for Swahili↔English (SW↔EN), Tagalog↔English (TL↔EN) and Somali↔English (SO↔EN) contains a mixture of domains such as news and weblogs and is collected from the IARPA MATERIAL program2, the Global …","url":["https://arxiv.org/pdf/1811.01116"]}
@@ -103,7 +103,7 @@
{"year":"2018","title":"BlogSet-BR: A Brazilian Portuguese Blog Corpus","authors":["H Santos, V Woloszyn, R Vieira - … of the Eleventh International Conference on …, 2018"],"snippet":"… For instance, the Common Crawl project maintains an open repository of web crawl data that can be accessed and analyzed by any research group2. This corpus has been used to build language models (Roziewski …","url":["http://www.aclweb.org/anthology/L18-1105"]}
{"year":"2018","title":"BomJi at SemEval-2018 Task 10: Combining Vector-, Pattern-and Graph-based Information to Identify Discriminative Attributes","authors":["E Santus, C Biemann, E Chersoni"],"snippet":"… 2The pre-trained vectors are available, respectively, at https://code.google.com/archive/ p/ word2vec/ (Google News, 300 dimensions) and at https://nlp.stanford.edu/projects/ glove/ (Common Crawl, 840B tokens, 300 dimensions) …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/publications/2018-santusetal-semeval-bomji.pdf"]}
{"year":"2018","title":"Bootstrapping Multilingual Intent Models via Machine Translation for Dialog Automation","authors":["N Ruiz, S Bangalore, J Chen - arXiv preprint arXiv:1805.04453, 2018"],"snippet":"… The NMT models were trained with parallel English-Spanish data from Europarl v7, CommonCrawl, and WMT News Commentary v8 from the WMT 2013 evaluation campaign (Bojar et al., 2013), as well as the TED talks from IWSLT 2014 (Cettolo et al., 2014) …","url":["https://arxiv.org/pdf/1805.04453"]}
-
{"year":"2018","title":"Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)","authors":["A Sharp","T Cohen, D Widdows - Proceedings of the 22nd Conference on Computational …, 2018"],"snippet":"
{"year":"2018","title":"Bringing replication and reproduction together with generalisability in NLP: Three reproduction studies for Target Dependent Sentiment Analysis","authors":["A Moore, P Rayson - arXiv preprint arXiv:1806.05219, 2018"],"snippet":"… We found the best word vectors from SSWE and the common crawl 42B 300 dimension Glove vectors by five fold stratified cross validation for the NP methods and the highest accuracy on the validation set for the LSTM methods …","url":["https://arxiv.org/pdf/1806.05219"]}
{"year":"2018","title":"C) uestion Answering System with Deep Learning","authors":["JSRMI Schoenhals"],"snippet":"… The dataset contains more than 100k question-answer pairs on more than 500 articles, and it is split into 90k/10k train/dev question-context tuples with hidden test set. 3.2 Word Embeddings We use pre-trained GLoVe embeddings from the 840B Common Crawl corpus …","url":["http://web.stanford.edu/class/cs224n/reports/6908933.pdf"]}
{"year":"2018","title":"CAESAR: Context Awareness Enabled Summary-Attentive Reader","authors":["LH Chen, K Tripathi - arXiv preprint arXiv:1803.01335, 2018"],"snippet":"… Aftering experimenting with hyperparameters and various preprocessing settings, we settle on the following experiment details which gave the optimal result. We apply pretrained GloVe word embeddings trained on common","url":["https://arxiv.org/pdf/1803.01335"]}
@@ -184,7 +184,7 @@
{"year":"2018","title":"Discriminator at SemEval-2018 Task 10: Minimally Supervised Discrimination","authors":["A Kulmizev, M Abdou, V Ravishankar, M Nissim - Proceedings of The 12th …, 2018"],"snippet":"… The VSM used in our final submission consisted of an av- erage of three sets of embeddings: GloVe word embeddings trained on Common Crawl (840B to- kens) (Pennington et al., 2014), the same GloVe embeddings post …","url":["http://www.aclweb.org/anthology/S18-1167"]}
{"year":"2018","title":"Distinguishing attributes using text corpora and relational knowledge","authors":["R Speer, J Lowry-Duda"],"snippet":"… Unicode CLDR emoji data • word2vec, precomputed on Google News • GloVe, precomputed on the Common Crawl • fastText, customized to learn from parallel text, trained on OpenSubtitles 2016 We used the embeddings …","url":["http://blog.conceptnet.io/2018/06/naacl2018-poster.pdf"]}
{"year":"2018","title":"Distributed Evaluation of Subgraph Queries Using Worstcase Optimal LowMemory Dataflows","authors":["K Ammar, F McSherry, S Salihoglu, M Joglekar - arXiv preprint arXiv:1802.03760, 2018"],"snippet":"Page 1. Distributed Evaluation of Subgraph Queries Using Worst-case Optimal Low-Memory Dataflows Khaled Ammar†, Frank McSherry‡, Semih Salihoglu†, Manas Joglekar♯ †University of Waterloo, ‡ETH Zürich,♯Google …","url":["https://arxiv.org/pdf/1802.03760"]}
-
{"year":"2018","title":"Distributed Representations of Tuples for Entity Resolution","authors":["MESTS Joty, MON Tang - Proceedings of the VLDB Endowment, 2018","MESTS Joty, MON Tang - Proceedings of the VLDB Endowment,(11), 2018"],"snippet":"…
{"year":"2018","title":"DL Team at SemEval-2018 Task 1: Tweet Affect Detection using Sentiment Lexicons and Embeddings","authors":["D Kravchenko, L Pivovarova - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… We use the following two models: 1. Common Crawl: 300-dimensional vectors trained on huge Internet corpus of 840 billion tokens and 2.2 million distinct words … GloVe Common Crawl 46.93 53.98 43.66 56.31 66.38 59.26 Google …","url":["http://www.aclweb.org/anthology/S18-1025"]}
{"year":"2018","title":"DMCB at SemEval-2018 Task 1: Transfer Learning of Sentiment Classification Using Group LSTM for Emotion Intensity prediction","authors":["Y Kim, H Lee - Proceedings of The 12th International Workshop on …, 2018"],"snippet":"… We try five pre-trained word embeddings to choose the best one for the target model. Two are trained with GloVe (Pennington et al., 2014) using different data sets: one1 is trained with very large data in Common crawl, and the …","url":["http://www.aclweb.org/anthology/S18-1044"]}
{"year":"2018","title":"Domain Adapted Word Embeddings for Improved Sentiment Classification","authors":["PK Sarma, YI Liang, WA Sethares - arXiv preprint arXiv:1805.04576, 2018","PKSYL William, A Sethares"],"snippet":"… Word embedding Dimension GloVe 100 word2vec 300 LSA 70 CCA-DA 68 KCCA-DA 68 GloVe common crawl 300 AdaptGloVe 300 … WG ∈ R|VG|×d2 be the matrix of generic word embeddings (obtained by, eg, running …","url":["https://ar5iv.labs.arxiv.org/html/1805.04576","https://arxiv.org/pdf/1805.04576"]}
@@ -254,7 +254,7 @@
{"year":"2018","title":"Human versus automatic quality evaluation of NMT and PBSMT","authors":["D Shterionov, R Superbo, P Nagle, L Casanellas… - Machine Translation, 2018"],"snippet":"… 2015) is built with TED talks, EPPS, news commentary,8 and Common Crawl data; the NMT system they compare to (Luong and Manning 2015) is a pre-trained NMT system that was further improved with data provided by the IWSLT2015 organizers …","url":["https://link.springer.com/article/10.1007/s10590-018-9220-z"]}
{"year":"2018","title":"Hybrid Self-Attention Network for Machine Translation","authors":["K Song, T Xu, F Peng, J Lu - arXiv preprint arXiv:1811.00253, 2018"],"snippet":"… 2016) shared vocabulary. WMT14 English-German WMT14 EnglishGerman dataset (Buck, Heafield, and van Ooyen 2014) comprises about 4.5 million sentence pairs that are extracted from three corpora: Common …","url":["https://arxiv.org/pdf/1811.00253"]}
{"year":"2018","title":"Hypothesis Only Baselines in Natural Language Inference","authors":["A Poliak, J Naradowsky, A Haldar, R Rudinger… - arXiv preprint arXiv …, 2018"],"snippet":"… Following Conneau et al. (2017), we map the resulting to- kens to 300-dimensional GloVe vectors (Pennington et al., 2014) trained on 840 billion tokens from the Common Crawl, using the GloVe OOV vector for unknown words …","url":["https://arxiv.org/pdf/1805.01042"]}
-
{"year":"2018","title":"Identifying Semantic Divergences in Parallel Text without Annotations","authors":["Y Vyas, X Niu, M Carpuat - arXiv preprint arXiv:1803.11112, 2018","YVXNM Carpuat"],"snippet":"…
{"year":"2018","title":"Identifying the Most Effective Feature Category in Machine Learning-based Phishing Website Detection","authors":["CL Tan, KL Chiew, N Musa, DHA Ibrahim - International Journal of Engineering & …, 2018"],"snippet":"… [31] “Common Crawl”, available online: http://commoncrawl.org/, last visit: 10.01.2017. [32] Selenium Project (2017), “Selenium WebDriver”, available online: http://www.seleniumhq.org/projects/webdriver/, last visit: 10.01.2017 …","url":["https://www.researchgate.net/profile/Choon_Lin_Tan/publication/329554643_Identifying_the_Most_Effective_Feature_Category_in_Machine_Learning-based_Phishing_Website_Detection/links/5c0f183e299bf139c74fb929/Identifying-the-Most-Effective-Feature-Category-in-Machine-Learning-based-Phishing-Website-Detection.pdf"]}
{"year":"2018","title":"Improved Text Analytics Transfer Learning","authors":["M Riemer, E Khabiri, R Goodwin - 2018"],"snippet":"… Our GRU model was fed a sequence of fixed 300 dimensional Glove vectors (Pennington et al., 2014), representing words based on analysis of 840 billion words from a common crawl of the internet, as the input xt for all tasks …","url":["https://openreview.net/pdf?id=HyggjiiMz6m"]}
{"year":"2018","title":"Improving Cross-Lingual Word Embeddings by Meeting in the Middle","authors":["Y Doval, J Camacho-Collados, L Espinosa-Anke… - arXiv preprint arXiv …, 2018"],"snippet":"… For Italian and German, we use the itWaC and sdeWaC corpora from the WaCky project (Baroni et al., 2009), containing 2 and 0.8 billion words, respectively.2 Lastly, for Finnish, we use the Common Crawl monolingual …","url":["https://arxiv.org/pdf/1808.08780"]}
@@ -267,7 +267,7 @@
{"year":"2018","title":"Incorporating Statistical Machine Translation Word Knowledge into Neural Machine Translation","authors":["X Wang, Z Tu, M Zhang - IEEE/ACM Transactions on Audio, Speech, and …, 2018"],"snippet":"Page 1. 2329-9290 (c) 2018 IEEE. Personal use is permitted, but republication/ redistribution requires IEEE permission. See http://www.ieee.org/ publications_standards/publications/rights/index.html for more information. This …","url":["https://ieeexplore.ieee.org/abstract/document/8421063/"]}
{"year":"2018","title":"Incorporating the Structure of the Belief State in End-to-End Task-Oriented Dialogue Systems","authors":["L Shu, P Molino, M Namazifar, B Liu, H Xu, H Zheng…"],"snippet":"Page 1. Incorporating the Structure of the Belief State in End-to-End Task-Oriented Dialogue Systems Lei Shu∗1, Piero Molino2, Mahdi Namazifar3, Bing Liu1, Hu Xu1, Huaixiu Zheng3, and Gokhan Tur3 1University …","url":["http://alborz-geramifard.com/workshops/nips18-Conversational-AI/Papers/18convai-Incorporating%20the%20Structure.pdf"]}
{"year":"2018","title":"Inducing Grammars with and for Neural Machine Translation","authors":["K Tran, Y Bisk - arXiv preprint arXiv:1805.10850, 2018","Y Bisk, K Tran - Proceedings of the 2nd Workshop on Neural Machine …, 2018"],"snippet":"… Table 1 shows the statistics of the data. For En↔De, we use a concatenation of Europarl, Common Crawl, Rapid corpus of EU press releases, and News Commentary v12 … For En↔Ru, we use Common Crawl, News Commentary v12, and Yandex Corpus …","url":["http://www.aclweb.org/anthology/W18-2704","https://arxiv.org/pdf/1805.10850"]}
-
{"year":"2018","title":"Inducing Implicit Relations from Text Using Distantly Supervised Deep Nets","authors":["M Glass, A Gliozzo, O Hassanzadeh… - International Semantic Web …, 2018"],"snippet":"… We
{"year":"2018","title":"InferLite: Simple Universal Sentence Representations from Natural Language Inference Data","authors":["J Kiros, W Chan - Proceedings of the 2018 Conference on Empirical …, 2018"],"snippet":"… (2018) describe a contextual gating Feature dataset dim method Glove Common Crawl 300 News Google News 500 CBOW Query Google Search 800 CBOW Table 1: Comparison of word representations used. method for word embedding selection …","url":["http://www.aclweb.org/anthology/D18-1524"]}
{"year":"2018","title":"Inferring gender of Reddit users","authors":["E Vasilev - 2018"],"snippet":"Page 1. People and Knowledge Networks WeST Fachbereich 4: Informatik Institute for Web Science and Technologies Inferring gender of Reddit users Masterarbeit zur Erlangung des Grades einer Master of Science (M.Sc.) im Studiengang Web Science …","url":["https://kola.opus.hbz-nrw.de/files/1619/Master_thesis_Vasilev.pdf"]}
{"year":"2018","title":"Inferring Missing Categorical Information in Noisy and Sparse Web Markup","authors":["N Tempelmeier, E Demidova, S Dietze - arXiv preprint arXiv:1803.00446, 2018"],"snippet":"… information. For instance, from 26 million nodes describing events within the Common Crawl in 2016, 59% of nodes provide less than six statements and only 257,000 nodes (0.96%) are typed with more specific event subtypes …","url":["https://arxiv.org/pdf/1803.00446"]}
@@ -282,7 +282,7 @@
{"year":"2018","title":"LA3: A Scalable Link-and Locality-Aware Linear Algebra-Based Graph Analytics System","authors":["Y Ahmad, O Khattab, A Malik, A Musleh, M Hammoud… - Proceedings of the VLDB …, 2018"],"snippet":"Page 1. LA3: A Scalable Linkand Locality-Aware Linear Algebra-Based Graph Analytics System Yousuf Ahmad, Omar Khattab, Arsal Malik, Ahmad Musleh, Mohammad Hammoud Carnegie Mellon University in Qatar 1myahmad …","url":["http://www.vldb.org/pvldb/vol11/p920-ahmad.pdf"]}
{"year":"2018","title":"Language Modeling at Scale","authors":["M Patwary, M Chabbi, H Jun, J Huang, G Diamos… - arXiv preprint arXiv …, 2018"],"snippet":"… The figure shows four datasets: 1-Billion word [6] (1b), Gutenberg [7] (gb), Common crawl [8] (cc), and Amazon review [9] (ar) … The 4 lines correspond to 4 datasets: one Billion word (1b), Gutenberg (gb), Common Crawl (cc), and Amazon Review (ar). TABLE I DATASETS …","url":["https://arxiv.org/pdf/1810.10045"]}
{"year":"2018","title":"Language use shapes cultural norms: Large scale evidence from gender","authors":["M Lewis, G Lupyan"],"snippet":"… Results Figure 2 shows the effect size measures derived from the English Wikipedia corpus plotted against effect size estimates reported by CBN from two different models (trained on the Common Crawl and Google News corpora) … model Common Crawl (GloVe) …","url":["http://home.uchicago.edu/~mollylewis/papers/gender_cogsci_2018.pdf"]}
-
{"year":"2018","title":"Large scale distributed neural network training through online distillation","authors":["AT Passos, G Pereyra, G Hinton, G Dahl, R Ormandi… - 2018","R Anil, G Pereyra, A Passos, R Ormandi, GE Dahl… - arXiv preprint arXiv …, 2018"],"snippet":"…
{"year":"2018","title":"Large-Scale Analysis of Style Injection by Relative Path Overwrite","authors":["S Arshad, SA Mirheidari, T Lauinger, B Crispo, E Kirda… - 2018"],"snippet":"… using RPO. We extract pages using relativepath stylesheets from the Common Crawl dataset [9], automatically test if style directives can be injected using RPO, and determine whether they are interpreted by the browser. Out …","url":["http://www.ccs.neu.edu/home/arshad/publications/www2018rpo.pdf"]}
{"year":"2018","title":"Latent Question Interpretation Through Parameter Adaptation Using Stochastic Neuron","authors":["T Parshakova, DS Kim"],"snippet":"… In effect, it led to maximally effective behaviour in the question-answering task. 6 Experiments Implementation Details For the word embeddings we use GloVe embeddings pretrained on the 840B Common Crawl corpus [Pennington et al., 2014] …","url":["http://ceur-ws.org/Vol-2134/paper07.pdf"]}
{"year":"2018","title":"Latent Semantic Analysis Approach for Document Summarization Based on Word Embeddings","authors":["K Al-Sabahi, Z Zuping, Y Kang - arXiv preprint arXiv:1807.02748, 2018"],"snippet":"… Training is performed on aggregated global word-word co-occurrence statistics from a corpus. In this work, we use the one trained on Common Crawl (840B tokens, 2.2M vocab, cased, 300d vectors, 2.03 GB download): glove.840B.300d.zip. 3.2. LSA algorithm …","url":["https://arxiv.org/pdf/1807.02748"]}
@@ -339,7 +339,7 @@
{"year":"2018","title":"Multi-turn QA: A RNN Contextual Approach to Intent Classification for Goal-oriented Systems","authors":["M Mensio, G Rizzo, M Morisio - Companion of the The Web Conference 2018 on The …, 2018"],"snippet":"… 8. Christian Buck, Kenneth Heafield, and Bas van Ooyen. 2014. N-gram Counts and Language Models from the Common Crawl Proceedings of the Language Resources and Evaluation Conference (LREC), Vol. Vol. 2. Citeseer, Reykjavik, Iceland, 4 …","url":["https://dl.acm.org/citation.cfm?id=3191539"]}
{"year":"2018","title":"Multilingual word embeddings and their utility in cross-lingual learning","authors":["A Kulmizev - 2018"],"snippet":"… language corpus. When said corpus is large enough (eg Wikipedia, Common Crawl, or the concatenation of the two), the resulting DSM can be assumed to represent the distributional semantics of an entire language. The algorithms …","url":["https://addi.ehu.es/bitstream/handle/10810/29083/TFM_Artur_Kulmizev.pdf?sequence=1"]}
{"year":"2018","title":"Multimodal Language Analysis with Recurrent Multistage Fusion","authors":["PP Liang, Z Liu, A Zadeh, LP Morency - arXiv preprint arXiv:1808.03920, 2018"],"snippet":"Page 1. Multimodal Language Analysis with Recurrent Multistage Fusion Paul Pu Liang1, Ziyin Liu2, Amir Zadeh2, Louis-Philippe Morency2 1Machine Learning Department, 2Language Technologies Institute Carnegie Mellon …","url":["https://arxiv.org/pdf/1808.03920"]}
-
{"year":"2018","title":"Multimodal Language Analysis with Recurrent Multistage Fusion: Supplementary Material","authors":["PP Liang, Z Liu, A Zadeh, LP Morency"],"snippet":"… 1.1 Multimodal Features Here we present extra details on feature extraction for the language, visual and acoustic modalities. Language: We used 300 dimensional Glove word embeddings trained on 840 billion tokens from …","url":["http://www.cs.cmu.edu/~pliang/papers/emnlp2018-recurrent-fusion-supp.pdf"]}
{"year":"2018","title":"Natural language processing using a neural network","authors":["B McCann, C Xiong, R Socher - US Patent App. 16/000,638, 2018"],"snippet":"… in the second language. In some examples, training of an MT-LSTM of the encoder 310 uses fixed 300-dimensional word vectors, such as the CommonCrawl-840B GloVe model for English word vectors. These word vectors …","url":["https://patentimages.storage.googleapis.com/f0/42/0e/084fa3f0799a39/US20180349359A1.pdf"]}
{"year":"2018","title":"Navigating Online Semantic Resources for Entity Set Expansion","authors":["WT Adrian, M Manna - International Symposium on Practical Aspects of …, 2018"],"snippet":"… the given entry (synset) in BabelNet. WebIsADatabase [25] is a publicly available database containing more than 400 million hypernymy relations extracted from the CommonCrawl web corpus. The tuples of the database are …","url":["https://link.springer.com/chapter/10.1007/978-3-319-73305-0_12"]}
{"year":"2018","title":"Near Human-Level Performance in Grammatical Error Correction with Hybrid Machine Translation","authors":["R Grundkiewicz, M Junczys-Dowmunt - arXiv preprint arXiv:1804.05945, 2018"],"snippet":"… ops). All systems use a 5-gram Language Model (LM) and OSM (Durrani et al., 2011) both estimated from the target side of the training data, and a 5-gram LM and 9-gram WCLM trained on Common Crawl data (Buck et al., 2014) …","url":["https://arxiv.org/pdf/1804.05945"]}
+
{"year":"2018","title":"Automatically Categorizing Software Technologies","authors":["M Nassif, C Treude, M Robillard - IEEE Transactions on Software Engineering, 2018","S Khan, WH Butt - 2022 2nd International Conference on Digital Futures …, 2022"],"snippet":"… sophisticated and extensive grammatical patterns (similar to the Hearst pattern) to the large network document corpus common crawl. Additionally, to find hypernyms, WebIsADb too practices pre-adjusters and post-modifiers. This concept is …","url":["https://ieeexplore.ieee.org/abstract/document/8359344/","https://ieeexplore.ieee.org/abstract/document/9787457/"]}
+
{"year":"2018","title":"Bringing Order to Neural Word Embeddings with Embeddings Augmented by Random Permutations (EARP)","authors":["A Sharp","T Cohen, D Widdows - Proceedings of the 22nd Conference on Computational …, 2018"],"snippet":"… ) report a best accuracy of 69.3% after training Glove on a corpus of 42 billion words, and Mikolov and colleagues (2017) report an accuracy of 73% when training a subword-sensitive CBOW model for five iterations across a 630 billion word corpus …","url":["http://www.aclweb.org/anthology/K18-1045","https://zdoc.pub/bringing-order-to-neural-word-embeddings-with-embeddings-aug.html"]}
+
{"year":"2018","title":"Distributed Representations of Tuples for Entity Resolution","authors":["MESTS Joty, MON Tang - Proceedings of the VLDB Endowment, 2018","MESTS Joty, MON Tang - Proceedings of the VLDB Endowment,(11), 2018"],"snippet":"… For example, the popular GloVe dictionary is trained on the Common Crawl corpus, which is almost 2 TB requiring exorbitant computing re- sources … GloVe and word2vec learned the word embeddings by training on a large …","url":["http://da.qcri.org/ntang/pubs/vldb18-deeper.pdf","https://pdfs.semanticscholar.org/334e/9eb88738671a5c9a53dea174586e885ec00b.pdf"]}
+
{"year":"2018","title":"Identifying Semantic Divergences in Parallel Text without Annotations","authors":["Y Vyas, X Niu, M Carpuat - arXiv preprint arXiv:1803.11112, 2018","YVXNM Carpuat"],"snippet":"… Fleiss' Kappa indicates moderate agreement between annotators (0.41 for OpenSubtitles and 0.49 for Common Crawl) … classifier which uses external resources, all models are trained on the exact same parallel corpora …","url":["https://arxiv.org/pdf/1803.11112","https://deeplearn.org/arxiv/30360/identifying-semantic-divergences-in-parallel-text-without-annotations"]}
+
{"year":"2018","title":"Inducing Implicit Relations from Text Using Distantly Supervised Deep Nets","authors":["M Glass, A Gliozzo, O Hassanzadeh… - International Semantic Web …, 2018","N Mihindukulasooriya, G Rossiello"],"snippet":"… We did not identify TODs in common crawl, so we do not use composite contexts for this task. We combine the output of the two systems by, for each triple, taking the highest confidence from each system. We also ran the PCNN+ATT model of NRE on …","url":["https://link.springer.com/chapter/10.1007/978-3-030-00671-6_3","https://www.academia.edu/download/117530733/978-3-030-00671-6_3.pdf"]}
+
{"year":"2018","title":"Large scale distributed neural network training through online distillation","authors":["AT Passos, G Pereyra, G Hinton, G Dahl, R Ormandi… - 2018","R Anil, G Pereyra, A Passos, R Ormandi, GE Dahl… - arXiv preprint arXiv …, 2018"],"snippet":"… stochastic gradient descent. We have experiments on Criteo clickthrough rate, and the largest to-date dataset used for neural language modeling, based on Common Crawl and containing $6\\times 10^{11}$ tokens. In these …","url":["https://arxiv.org/pdf/1804.03235","https://research.google.com/pubs/pub46642.html"]}
+
{"year":"2018","title":"Multimodal Language Analysis with Recurrent Multistage Fusion: Supplementary Material","authors":["PP Liang, Z Liu, A Zadeh, LP Morency"],"snippet":"… 1.1 Multimodal Features Here we present extra details on feature extraction for the language, visual and acoustic modalities. Language: We used 300 dimensional Glove word embeddings trained on 840 billion tokens from …","url":["http://www.cs.cmu.edu/~pliang/papers/emnlp2018-recurrent-fusion-supp.pdf","https://aclanthology.org/anthology-files/attachments/D/D18/D18-1014.Attachment.zip"]}
2019.jsonl
CHANGED
@@ -183,7 +183,7 @@
{"year":"2019","title":"Deep Learning vs. Classic Models on a New Uzbek Sentiment Analysis Dataset","authors":["E Kuriyozov, S Matlatipov, MA Alonso…"],"snippet":"… We use as input the FastText pre-trained word embeddings of size 300 (Grave et al., 2018) for Uzbek language, that were created from Wiki pages and CommonCrawl, 9 which, to our knowledge, are the only available pre-trained …","url":["http://www.grupolys.org/biblioteca/KurMatAloGom2019a.pdf"]}
{"year":"2019","title":"Deep Learning-based Categorical and Dimensional Emotion Recognition for Written and Spoken Text","authors":["BT Atmaja, K Shirai, M Akagi - INA-Rxiv. June, 2019"],"snippet":"… meaning. Glove captured the global corpus statistics from the corpus, for example, a Wikipedia document or a common crawl document. In GloVe model, the cost function is given by V ∑ i,j=1 f(Xi,j)(uT i,jvj + bi + cj − log Xi,j)2 (2) …","url":["https://osf.io/fhu29/download/?format=pdf"]}
{"year":"2019","title":"Deep Structured Semantic Model for Recommendations in E-commerce","authors":["A Larionova, P Kazakova, N Nikitinsky - International Conference on Hybrid Artificial …, 2019"],"snippet":"… We generated a vector representation for each text by inferring FastText embeddings [4] from their tokens and averaging them (FastText model is pretrained on the Russian language subset of the Common Crawl corpus [10]) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29859-3_8"]}
-
{"year":"2019","title":"Deepening Hidden Representations from Pre-trained Language Models for Natural Language Understanding","authors":["J Yang, H Zhao - arXiv preprint arXiv:1911.01940, 2019","JYH Zhao"],"snippet":"…
{"year":"2019","title":"Defending Against Neural Fake News","authors":["R Zellers, A Holtzman, H Rashkin, Y Bisk, A Farhadi… - arXiv preprint arXiv …, 2019"],"snippet":"… Dataset. We present RealNews, a large corpus of news articles from Common Crawl … Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News …","url":["https://arxiv.org/pdf/1905.12616"]}
{"year":"2019","title":"Deliverable 4.2: Data Integration (v. 1)","authors":["A Haller, JD Fernández, A Polleres, MR Kamdar - Work, 2019"],"snippet":"Page 1. Cyber-Physical Social Systems for City-wide Infrastructures Deliverable 4.2: Data Integration (v.1) Authors : Armin Haller, Javier D. Fernández, Axel Polleres, Maulik R. Kamdar Dissemination Level : Public Due date …","url":["http://cityspin.net/wp-content/uploads/2017/10/D4.2-Data-Integration.pdf"]}
{"year":"2019","title":"Design and implementation of an open source Greek POS Tagger and Entity Recognizer using spaCy","authors":["E Partalidou, E Spyromitros-Xioufis, S Doropoulos… - IEEE/WIC/ACM International …, 2019"],"snippet":"… 3.4 Evaluation and comparison of results In the first experiment the model was trained using pretrained vectors extracted from two different sources, Common Crawl and Wikipedia and can be found at the official FastText …","url":["https://dl.acm.org/citation.cfm?id=3352543"]}
@@ -533,7 +533,7 @@
| 533 |
{"year":"2019","title":"SberQuAD--Russian Reading Comprehension Dataset: Description and Analysis","authors":["P Efimov, L Boytsov, P Braslavski - arXiv preprint arXiv:1912.09723, 2019"],"snippet":"… We tokenized text using spaCy16. To initialize the embedding layer for BiDAF, DocQA, DrQA, and R-Net we use Russian case-sensitive fastText embeddings trained on Common Crawl and Wikipedia17. This initialization is used for both questions and paragraphs …","url":["https://arxiv.org/pdf/1912.09723"]}
|
| 534 |
{"year":"2019","title":"SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification","authors":["C Onose, DC Cercel, S Trausan-Matu - Proceedings of the Sixth Workshop on NLP …, 2019"],"snippet":"… (2018), Nordic Language Processing Laboratory (NLPL) word embedding repository (Kutuzov et al., 2017) and Common Crawl (CC) word vectors (Grave et al., 2018). The relevant details for each word vector representation model can be viewed in Table 2 …","url":["https://www.aclweb.org/anthology/W19-1418"]}
|
| 535 |
{"year":"2019","title":"Scalable Cross-Lingual Transfer of Neural Sentence Embeddings","authors":["H Aldarmaki, M Diab - arXiv preprint arXiv:1904.05542, 2019"],"snippet":"… We used WMT'12 Common Crawl data for crosslingual alignment, and WMT'12 test sets for evaluations. We used the augmented SNLI data de- scribed in (Dasgupta et al., 2018) and their translations for training the mono-lingual and joint InferSent models …","url":["https://arxiv.org/pdf/1904.05542"]}
|
| 536 |
-
{"year":"2019","title":"SECNLP: A Survey of Embeddings in Clinical Natural Language Processing","authors":["K KS, S Sangeetha - arXiv preprint arXiv:1903.01039, 2019","KK Subramanyam, S Sivanesan - Journal of Biomedical Informatics, 2019"],"snippet":"
|
| 537 |
{"year":"2019","title":"Security In Plain TXT","authors":["A Portier, H Carter, C Lever"],"snippet":"… These seed domains are compiled from a combination of sources, including the Alexa top 1 million, the TLD zone files for COM, NAME, NET, ORG, and BIZ, sites captured by the Common Crawl project, multiple public domain …","url":["http://www.henrycarter.org/papers/plaintxt19.pdf"]}
|
| 538 |
{"year":"2019","title":"Security Posture Based Incident Forecasting","authors":["D Mulugeta - 2019"],"snippet":"Page 1. Page 2. Page 3. Security Posture Based Incident Forecasting A Thesis Submitted to the Faculty of Drexel University by Dagmawi Mulugeta in partial fulfillment of the requirements for the degree of Master of Science June 2019 Page 4 …","url":["http://search.proquest.com/openview/a6f070655e6045b93b595adc3b0965ae/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 539 |
{"year":"2019","title":"See-Through-Text Grouping for Referring Image Segmentation","authors":["DJ Chen, S Jia, YC Lo, HT Chen, TL Liu - … of the IEEE International Conference on …, 2019"],"snippet":"… The representation st is visual-attended and its goodness is linked to the predicted segmentation map Pt−1. The GloVe model in our implementation is pre-trained on Common Crawl in 840B tokens. Following …","url":["http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_See-Through-Text_Grouping_for_Referring_Image_Segmentation_ICCV_2019_paper.pdf"]}
@@ -633,7 +633,7 @@
| 633 |
{"year":"2019","title":"Towards Functionally Similar Corpus Resources for Translation","authors":["M Kunilovskaya, S Sharoff"],"snippet":"… Secondly, we used lemmatised texts, with stop words filtered out (biLSTMlex in Table 1). For both scenarios we used pre-trained word embeddings of size 300, trained on the English Wikipedia and CommonCrawl data, using …","url":["http://corpus.leeds.ac.uk/serge/publications/2019-RANLP.pdf"]}
|
| 634 |
{"year":"2019","title":"Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning","authors":["D Cevher, S Zepf, R Klinger - arXiv preprint arXiv:1909.02764, 2019"],"snippet":"… We use a neural network with an embedding layer (frozen weights, pretrained on Common Crawl and Wikipedia (Grave et al., 2018)), a bidirectional LSTM (Schuster and Paliwal, 1997), and two dense layers followed by a soft max output layer …","url":["https://arxiv.org/pdf/1909.02764"]}
|
| 635 |
{"year":"2019","title":"Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)","authors":["S Castro, D Hazarika, V Pérez-Rosas, R Zimmermann… - arXiv preprint arXiv …, 2019"],"snippet":"… 768. We also considered averaging Common Crawl pre-trained 300 dimensional GloVe word vectors (Pennington et al., 2014) for each token; however, it resulted in lower performance as compared to BERT-based features …","url":["https://arxiv.org/pdf/1906.01815"]}
|
| 636 |
-
{"year":"2019","title":"Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation","authors":["B Wu, H Zhang, M Li, Z Wang, Q Feng, J Huang… - arXiv preprint arXiv …, 2020","HZ Bowen Wu, M Li, Z Wang, Q Feng, J Huang…"],"snippet":"…
|
| 637 |
{"year":"2019","title":"Towards Robust Named Entity Recognition for Historic German","authors":["S Schweter, J Baiter - arXiv preprint arXiv:1906.07592, 2019"],"snippet":"… 69.59% Common Crawl 68.97% Wikipedia + Common Crawl 72.00% Wikipedia + Common Crawl + Character 74.50 … 69.62% Riedl and Padó (2018) (with transfer-learning) 74.33% ONB Wikipedia 75.80% CommonCrawl 78.70% Wikipedia + CommonCrawl 79.46 …","url":["https://arxiv.org/pdf/1906.07592"]}
|
| 638 |
{"year":"2019","title":"Towards semantic-rich word embeddings","authors":["G Beringer, M Jabłonski, P Januszewski, A Sobecki…"],"snippet":"… collected (III), for the our approach. We use a pretrained embedding model from spaCy - en_vectors_web_lg, which contains 300-dimensional word vectors trained on Common Crawl with GloVe2. We compare results on the …","url":["https://annals-csis.org/Volume_18/drp/pdf/120.pdf"]}
|
| 639 |
{"year":"2019","title":"Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus","authors":["S Katsumata, M Komachi - arXiv preprint arXiv:1907.09724, 2019"],"snippet":"… makes up for the synthetic target data. To compare the fluency, the outputs of each best iter on JFLEG were evaluated with the perplexity based on the Common Crawl language model10. The perplexity of USMTforward in iter …","url":["https://arxiv.org/pdf/1907.09724"]}
| 183 |
{"year":"2019","title":"Deep Learning vs. Classic Models on a New Uzbek Sentiment Analysis Dataset","authors":["E Kuriyozov, S Matlatipov, MA Alonso…"],"snippet":"… We use as input the FastText pre-trained word embeddings of size 300 (Grave et al., 2018) for Uzbek language, that were created from Wiki pages and CommonCrawl, 9 which, to our knowledge, are the only available pre-trained …","url":["http://www.grupolys.org/biblioteca/KurMatAloGom2019a.pdf"]}
|
| 184 |
{"year":"2019","title":"Deep Learning-based Categorical and Dimensional Emotion Recognition for Written and Spoken Text","authors":["BT Atmaja, K Shirai, M Akagi - INA-Rxiv. June, 2019"],"snippet":"… meaning. Glove captured the global corpus statistics from the corpus, for example, a Wikipedia document or a common crawl document. In GloVe model, the cost function is given by V ∑ i,j=1 f(Xi,j)(uT i,jvj + bi + cj − log Xi,j)2 (2) …","url":["https://osf.io/fhu29/download/?format=pdf"]}
|
| 185 |
{"year":"2019","title":"Deep Structured Semantic Model for Recommendations in E-commerce","authors":["A Larionova, P Kazakova, N Nikitinsky - International Conference on Hybrid Artificial …, 2019"],"snippet":"… We generated a vector representation for each text by inferring FastText embeddings [4] from their tokens and averaging them (FastText model is pretrained on the Russian language subset of the Common Crawl corpus [10]) …","url":["https://link.springer.com/chapter/10.1007/978-3-030-29859-3_8"]}
|
| 186 |
+
{"year":"2019","title":"Deepening Hidden Representations from Pre-trained Language Models for Natural Language Understanding","authors":["J Yang, H Zhao - arXiv preprint arXiv:1911.01940, 2019","JYH Zhao"],"snippet":"… the other hand. In addition to BooksCorpus and English Wikipedia, it also uses Giga5, ClueWeb 2012-B and Common Crawl for pre-training. Trained with dynamic masking, large mini-batches and a larger bytelevel BPE, full …","url":["https://arxiv.org/pdf/1911.01940","https://deeplearn.org/arxiv/101390/deepening-hidden-representations-from-pre-trained-language-models-for-natural-language-understanding"]}
|
| 187 |
{"year":"2019","title":"Defending Against Neural Fake News","authors":["R Zellers, A Holtzman, H Rashkin, Y Bisk, A Farhadi… - arXiv preprint arXiv …, 2019"],"snippet":"… Dataset. We present RealNews, a large corpus of news articles from Common Crawl … Thus, we construct one by scraping dumps from Common Crawl, limiting ourselves to the 5000 news domains indexed by Google News …","url":["https://arxiv.org/pdf/1905.12616"]}
|
| 188 |
{"year":"2019","title":"Deliverable 4.2: Data Integration (v. 1)","authors":["A Haller, JD Fernández, A Polleres, MR Kamdar - Work, 2019"],"snippet":"Page 1. Cyber-Physical Social Systems for City-wide Infrastructures Deliverable 4.2: Data Integration (v.1) Authors : Armin Haller, Javier D. Fernández, Axel Polleres, Maulik R. Kamdar Dissemination Level : Public Due date …","url":["http://cityspin.net/wp-content/uploads/2017/10/D4.2-Data-Integration.pdf"]}
|
| 189 |
{"year":"2019","title":"Design and implementation of an open source Greek POS Tagger and Entity Recognizer using spaCy","authors":["E Partalidou, E Spyromitros-Xioufis, S Doropoulos… - IEEE/WIC/ACM International …, 2019"],"snippet":"… 3.4 Evaluation and comparison of results In the first experiment the model was trained using pretrained vectors extracted from two different sources, Common Crawl and Wikipedia and can be found at the official FastText …","url":["https://dl.acm.org/citation.cfm?id=3352543"]}
| 533 |
{"year":"2019","title":"SberQuAD--Russian Reading Comprehension Dataset: Description and Analysis","authors":["P Efimov, L Boytsov, P Braslavski - arXiv preprint arXiv:1912.09723, 2019"],"snippet":"… We tokenized text using spaCy16. To initialize the embedding layer for BiDAF, DocQA, DrQA, and R-Net we use Russian case-sensitive fastText embeddings trained on Common Crawl and Wikipedia17. This initialization is used for both questions and paragraphs …","url":["https://arxiv.org/pdf/1912.09723"]}
|
| 534 |
{"year":"2019","title":"SC-UPB at the VarDial 2019 Evaluation Campaign: Moldavian vs. Romanian Cross-Dialect Topic Identification","authors":["C Onose, DC Cercel, S Trausan-Matu - Proceedings of the Sixth Workshop on NLP …, 2019"],"snippet":"… (2018), Nordic Language Processing Laboratory (NLPL) word embedding repository (Kutuzov et al., 2017) and Common Crawl (CC) word vectors (Grave et al., 2018). The relevant details for each word vector representation model can be viewed in Table 2 …","url":["https://www.aclweb.org/anthology/W19-1418"]}
|
| 535 |
{"year":"2019","title":"Scalable Cross-Lingual Transfer of Neural Sentence Embeddings","authors":["H Aldarmaki, M Diab - arXiv preprint arXiv:1904.05542, 2019"],"snippet":"… We used WMT'12 Common Crawl data for crosslingual alignment, and WMT'12 test sets for evaluations. We used the augmented SNLI data de- scribed in (Dasgupta et al., 2018) and their translations for training the mono-lingual and joint InferSent models …","url":["https://arxiv.org/pdf/1904.05542"]}
|
| 536 |
+
{"year":"2019","title":"SECNLP: A Survey of Embeddings in Clinical Natural Language Processing","authors":["K KS, S Sangeetha - arXiv preprint arXiv:1903.01039, 2019","KK Subramanyam, S Sivanesan - Journal of Biomedical Informatics, 2019"],"snippet":"Page 1. 1 SECNLP: A Survey of Embeddings in Clinical Natural Language Processing Kalyan KS, S. Sangeetha Text Analytics and Natural Language Processing Lab Department of Computer Applications National …","url":["https://arxiv.org/pdf/1903.01039","https://www.sciencedirect.com/science/article/pii/S1532046419302436"]}
|
| 537 |
{"year":"2019","title":"Security In Plain TXT","authors":["A Portier, H Carter, C Lever"],"snippet":"… These seed domains are compiled from a combination of sources, including the Alexa top 1 million, the TLD zone files for COM, NAME, NET, ORG, and BIZ, sites captured by the Common Crawl project, multiple public domain …","url":["http://www.henrycarter.org/papers/plaintxt19.pdf"]}
|
| 538 |
{"year":"2019","title":"Security Posture Based Incident Forecasting","authors":["D Mulugeta - 2019"],"snippet":"Page 1. Page 2. Page 3. Security Posture Based Incident Forecasting A Thesis Submitted to the Faculty of Drexel University by Dagmawi Mulugeta in partial fulfillment of the requirements for the degree of Master of Science June 2019 Page 4 …","url":["http://search.proquest.com/openview/a6f070655e6045b93b595adc3b0965ae/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 539 |
{"year":"2019","title":"See-Through-Text Grouping for Referring Image Segmentation","authors":["DJ Chen, S Jia, YC Lo, HT Chen, TL Liu - … of the IEEE International Conference on …, 2019"],"snippet":"… The representation st is visual-attended and its goodness is linked to the predicted segmentation map Pt−1. The GloVe model in our implementation is pre-trained on Common Crawl in 840B tokens. Following …","url":["http://openaccess.thecvf.com/content_ICCV_2019/papers/Chen_See-Through-Text_Grouping_for_Referring_Image_Segmentation_ICCV_2019_paper.pdf"]}
| 633 |
{"year":"2019","title":"Towards Functionally Similar Corpus Resources for Translation","authors":["M Kunilovskaya, S Sharoff"],"snippet":"… Secondly, we used lemmatised texts, with stop words filtered out (biLSTMlex in Table 1). For both scenarios we used pre-trained word embeddings of size 300, trained on the English Wikipedia and CommonCrawl data, using …","url":["http://corpus.leeds.ac.uk/serge/publications/2019-RANLP.pdf"]}
|
| 634 |
{"year":"2019","title":"Towards Multimodal Emotion Recognition in German Speech Events in Cars using Transfer Learning","authors":["D Cevher, S Zepf, R Klinger - arXiv preprint arXiv:1909.02764, 2019"],"snippet":"… We use a neural network with an embedding layer (frozen weights, pretrained on Common Crawl and Wikipedia (Grave et al., 2018)), a bidirectional LSTM (Schuster and Paliwal, 1997), and two dense layers followed by a soft max output layer …","url":["https://arxiv.org/pdf/1909.02764"]}
|
| 635 |
{"year":"2019","title":"Towards Multimodal Sarcasm Detection (An _Obviously_ Perfect Paper)","authors":["S Castro, D Hazarika, V Pérez-Rosas, R Zimmermann… - arXiv preprint arXiv …, 2019"],"snippet":"… 768. We also considered averaging Common Crawl pre-trained 300 dimensional GloVe word vectors (Pennington et al., 2014) for each token; however, it resulted in lower performance as compared to BERT-based features …","url":["https://arxiv.org/pdf/1906.01815"]}
|
| 636 |
+
{"year":"2019","title":"Towards Non-task-specific Distillation of BERT via Sentence Representation Approximation","authors":["B Wu, H Zhang, M Li, Z Wang, Q Feng, J Huang… - arXiv preprint arXiv …, 2020","HZ Bowen Wu, M Li, Z Wang, Q Feng, J Huang…"],"snippet":"… distillation. 4.3 Hyperparameters For the student model in our proposed distilling method, we employ the 300-dimension GloVe (840B Common Crawl version; Pennington et al., 2014) to initialize the word embeddings. The …","url":["https://arxiv.org/pdf/2004.03097","https://www.researchgate.net/profile/Bowen_Wu10/publication/337113946_Towards_Non-task-specific_Distillation_of_BERT_via_Sentence_Representation_Approximation/links/5dc5cffc4585151435f7df39/Towards-Non-task-specific-Distillation-of-BERT-via-Sentence-Representation-Approximation.pdf"]}
|
| 637 |
{"year":"2019","title":"Towards Robust Named Entity Recognition for Historic German","authors":["S Schweter, J Baiter - arXiv preprint arXiv:1906.07592, 2019"],"snippet":"… 69.59% Common Crawl 68.97% Wikipedia + Common Crawl 72.00% Wikipedia + Common Crawl + Character 74.50 … 69.62% Riedl and Padó (2018) (with transfer-learning) 74.33% ONB Wikipedia 75.80% CommonCrawl 78.70% Wikipedia + CommonCrawl 79.46 …","url":["https://arxiv.org/pdf/1906.07592"]}
|
| 638 |
{"year":"2019","title":"Towards semantic-rich word embeddings","authors":["G Beringer, M Jabłonski, P Januszewski, A Sobecki…"],"snippet":"… collected (III), for the our approach. We use a pretrained embedding model from spaCy - en_vectors_web_lg, which contains 300-dimensional word vectors trained on Common Crawl with GloVe2. We compare results on the …","url":["https://annals-csis.org/Volume_18/drp/pdf/120.pdf"]}
|
| 639 |
{"year":"2019","title":"Towards Unsupervised Grammatical Error Correction using Statistical Machine Translation with Synthetic Comparable Corpus","authors":["S Katsumata, M Komachi - arXiv preprint arXiv:1907.09724, 2019"],"snippet":"… makes up for the synthetic target data. To compare the fluency, the outputs of each best iter on JFLEG were evaluated with the perplexity based on the Common Crawl language model10. The perplexity of USMTforward in iter …","url":["https://arxiv.org/pdf/1907.09724"]}
2020.jsonl
CHANGED
@@ -23,7 +23,7 @@
| 23 |
{"year":"2020","title":"A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal","authors":["D Gholipour Ghalandari, C Hokamp, J Glover, G Ifrim - arXiv, 2020","DG Ghalandari, C Hokamp, NT Pham, J Glover, G Ifrim - arXiv preprint arXiv …, 2020"],"snippet":"… We also automatically extend these source articles by looking for related articles in the Common Crawl archive … Table 1: Example event summary and linked source ar- ticles from the Wikipedia Current Events Portal, and …","url":["https://arxiv.org/pdf/2005.10070","https://ui.adsabs.harvard.edu/abs/2020arXiv200510070G/abstract"]}
|
| 24 |
{"year":"2020","title":"A Large-Scale Semi-Supervised Dataset for Offensive Language Identification","authors":["S Rosenthal, P Atanasova, G Karadzhov, M Zampieri… - arXiv preprint arXiv …, 2020"],"snippet":"… The first layer of the LSTM model is an embedding layer, which we initialize with a concatenation of the GloVe 300-dimensional (Pennington et al., 2014) and FastText's Common Crawl 300dimensional embeddings (Grave et al., 2018). The Page 5 …","url":["https://arxiv.org/pdf/2004.14454"]}
|
| 25 |
{"year":"2020","title":"A Longitudinal Analysis of Job Skills for Entry-Level Data Analysts","authors":["T Dong, J Triche - Journal of Information Systems Education, 2020"],"snippet":"… Therefore, we used the Common Crawl dataset to address this problem (http:// commoncrawl.org/). Common Crawl is a non-profit organization that builds and maintains an open repository of web crawl data that is, in essence, a copy of the Internet …","url":["http://jise.org/Volume31/n4/JISEv31n4p312.pdf"]}
|
| 26 |
-
{"year":"2020","title":"A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages","authors":["P Ortiz Suárez, L Romary, B Sagot - arXiv, 2020","PO Suárez, L Romary, B Sagot - arXiv preprint arXiv:2006.06202, 2020"],"snippet":"…
|
| 27 |
{"year":"2020","title":"A Multilingual Evaluation for Online Hate Speech Detection","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata - ACM Transactions on Internet …, 2020"],"snippet":"… In particular, we use the Italian and German embeddings trained on Common Crawl and Wikipedia [33] with size 300 … English Fasttext Crawl embeddings: English embeddings trained by Fasttext9 on Common Crawl with an embedding size of 300 …","url":["https://dl.acm.org/doi/abs/10.1145/3377323"]}
|
| 28 |
{"year":"2020","title":"A Neural-based model to Predict the Future Natural Gas Market Price through Open-domain Event Extraction","authors":["MT Chau, D Esteves, J Lehmann"],"snippet":"… Strong baseline We feed the price and sentence embedding of filtered news using spaCy small English (Context tensor trained on [39], 300-d embedding vector) and large English model (trained on both [39] and Common Crawl …","url":["http://ceur-ws.org/Vol-2611/paper2.pdf"]}
|
| 29 |
{"year":"2020","title":"A NOVEL APPROACH FOR NAMED ENTITY RECOGNITION ON HINDI LANGUAGE USING RESIDUAL BILSTM NETWORK","authors":["R Shelke, D Thakore"],"snippet":"… It provides word embeddings for Hindi (and 157 other languages) and is based on the CBOW (Continuous Bag-of-Words) model. The CBOW model learns by predicting the current word based on its context, and it was trained …","url":["http://www.academia.edu/download/63216061/120200506-26612-102sbv8.pdf"]}
@@ -398,7 +398,7 @@
| 398 |
{"year":"2020","title":"Gender Detection on Social Networks using Ensemble Deep Learning","authors":["K Kowsari, M Heidarysafa, T Odukoya, P Potter… - arXiv preprint arXiv …, 2020"],"snippet":"… 25d, 50d, 100d, and 200d vectors. This word embedding is trained over even bigger corpora, including Wikipedia and Common Crawl content. The objective function is as follows: f(wi − wj, ˜wk) = Pik Pjk (2) where wi is refer to …","url":["https://arxiv.org/pdf/2004.06518"]}
|
| 399 |
{"year":"2020","title":"Gender stereotype reinforcement: Measuring the gender bias conveyed by ranking algorithms","authors":["A Fabris, A Purpura, G Silvello, GA Susto - Information Processing & Management, 2020"],"snippet":"… Corrado, Dean, 2013). Most frequently, they are learnt from large text corpora available online (such as Wikipedia, Google News and Common Crawl, capturing semantic relationships of words based on their usage. Recent work …","url":["https://arxiv.org/pdf/2009.01334"]}
|
| 400 |
{"year":"2020","title":"Gender stereotypes are reflected in the distributional structure of 25 languages","authors":["M Lewis, G Lupyan - Nature Human Behaviour, 2020"],"snippet":"Cultural stereotypes such as the idea that men are more suited for paid work and women are more suited for taking care of the home and family, may contribute to gender imbalances in science, technology, engineering and …","url":["https://www.nature.com/articles/s41562-020-0918-6"]}
|
| 401 |
-
{"year":"2020","title":"Generalisation of Cyberbullying Detection","authors":["K Richard, L Marc-André - arXiv preprint arXiv:2009.01046, 2020","MA Larochelle, R Khoury"],"snippet":"… We use FastText pre-trained on Common Crawl data featuring 300 dimensions and 2 million word vectors with subword
|
| 402 |
{"year":"2020","title":"Generalize Sentence Representation with Self-Inference","authors":["KC Yang, HY Kao"],"snippet":"… Our model is trained with the phrases in the parse trees and tested on the whole sentence. Experimental Settings We initialize word embeddings using the pretrained FastText common-crawl vectors (Mikolov et al. 2018) and freeze the weights during training …","url":["https://www.aaai.org/Papers/AAAI/2020GB/AAAI-YangKC.7098.pdf"]}
|
| 403 |
{"year":"2020","title":"Generating Categories for Sets of Entities","authors":["S Zhang, K Balog, J Callan - arXiv preprint arXiv:2008.08428, 2020"],"snippet":"… entity linking for tables and table schema to predicate matching. Ritze et al. [31] propose an iterative method for matching tables to DBpedia. They develop a manually annotated dataset for matching between a Web table corpus …","url":["https://arxiv.org/pdf/2008.08428"]}
|
| 404 |
{"year":"2020","title":"Generating Diverse Conversation Responses by Creating and Ranking Multiple Candidates","authors":["YP Ruan, ZH Ling, X Zhu, Q Liu, JC Gu - Computer Speech & Language, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300048"]}
@@ -493,7 +493,7 @@
| 493 |
{"year":"2020","title":"Introduction to Cloud Computing and Amazon Web Services (AWS)","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… 5 examples. IAM and S3 sections are necessary for Chapters 6 and 7 since we will be using data compiled by a nonprofit called common crawl which is only publicly available on S3 through AWS open registry. You will have …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_3"]}
|
| 494 |
{"year":"2020","title":"Introduction to Common Crawl Datasets","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"The Common Crawl Foundation (https://commoncrawl.org/) is a 501(c)(3) nonprofit involved in providing open access web crawl data going back to over eight years. They perform monthly web crawls which cover over 25 billion pages for each month. This …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_6"]}
|
| 495 |
{"year":"2020","title":"Introduction to Web Scraping","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… We will introduce natural language processing algorithms in Chapter 4, and we will put them into action in Chapters 6 and 7 on a Common Crawl dataset. The next step is loading the cleaned data from the preceding step into an appropriate database …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_1"]}
|
| 496 |
-
{"year":"2020","title":"Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition","authors":["G Sogancıoglu, O Verkholyak, H Kaya, D Fedotov… - INTERSPEECH, Shanghai …, 2020","G Soğancıoğlu, O Verkholyak, H Kaya, D Fedotov… - arXiv preprint arXiv …, 2020"],"snippet":"…
|
| 497 |
{"year":"2020","title":"Is language modeling enough? Evaluating effective embedding combinations","authors":["R Schneider, T Oberhauser, P Grundmann, FA Gers… - 2020"],"snippet":"… 2.1. Universal Text Embeddings Recently, researchers explore universal text embeddings trained on extensive Web corpora, such as the Common Crawl6 (Mikolov et al., 2018; Radford et al., 2019), the billion … 5https …","url":["https://eprints.soton.ac.uk/438613/1/LREC20_LM_TM_27_1_.pdf"]}
|
| 498 |
{"year":"2020","title":"Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation","authors":["B Eikema, W Aziz - arXiv preprint arXiv:2005.10283, 2020"],"snippet":"… For English-Nepali we also use a translated version of the Penn Treebank4 and for English-Sinhala we additionally use Open Subtitles (Lison et al., 2018). We use a filtered crawl of Wikipedia and Common Crawl released in Guzmán et al …","url":["https://arxiv.org/pdf/2005.10283"]}
|
| 499 |
{"year":"2020","title":"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings","authors":["KG Schmahl, TJ Viering, S Makrodimitris, AN Jahfari… - Proceedings of the Fourth …, 2020"],"snippet":"… These categories have shown significant bias towards male or female words in embeddings from Google News corpora [Mikolov et al., 2013a], Google Books [Jones et al., 2020], as well as a 'Common Crawl' corpus [Caliskan et al., 2017] …","url":["https://www.aclweb.org/anthology/2020.nlpcss-1.11.pdf"]}
@@ -501,7 +501,7 @@
| 501 |
{"year":"2020","title":"Italian Transformers Under the Linguistic Lens","authors":["A Miaschip, G Sartim, D Brunato, F Dell'Orletta… - Proceedings of the Seventh …, 2020"],"snippet":"… For instance, we can notice that, for both the probing models, features related to the distribution of syntactic relations (SyntacticDep) are better predicted by GePpeTto, while GilBERTo and UmBERTo-Commoncrawl are the best …","url":["http://ceur-ws.org/Vol-2769/paper_56.pdf"]}
|
| 502 |
{"year":"2020","title":"JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation","authors":["Z Mao, F Cromieres, R Dabre, H Song, S Kurohashi - arXiv preprint arXiv:2005.03361, 2020"],"snippet":"… Mono Ja Common Crawl 22M En News Crawl 22M Ru News Crawl 22M … 5.1.2. Monolingual data We use monolingual data containing 22M Japanese, 22M English and 22M Russian sentences randomly sub-sampled from Common Crawl dataset and News crawl4 dataset …","url":["https://arxiv.org/pdf/2005.03361"]}
|
| 503 |
{"year":"2020","title":"Joint Multiclass Debiasing of Word Embeddings","authors":["R Popović, F Lemmerich, M Strohmaier - arXiv preprint arXiv:2003.11520, 2020"],"snippet":"… As in previous studies [7], evaluation was done on three pretrained Word Embedding models with vector dimension of 300: FastText2(English we- bcrawl and Wikipedia, 2 million words), GloVe3(Common Crawl, Wikipedia …","url":["https://arxiv.org/pdf/2003.11520"]}
|
| 504 |
-
{"year":"2020","title":"Joint translation and unit conversion for end-to-end localization","authors":["G Dinu, P Mathur, M Federico, S Lauly, Y Al-Onaizan - arXiv preprint arXiv …, 2020","GDPMMFSL YaserAl-Onaizan, AWS Amazon"],"snippet":"… Europarl (Koehn, 2005) and news commentary data from WMT En→De shared task 2019 totalling 2.2 million sentences.2 Standard translation test sets do not have, however, enough examples of unit conversions and in fact
|
| 505 |
{"year":"2020","title":"KBPearl: a knowledge base population system supported by joint entity and relation linking","authors":["X Lin, H Li, H Xin, Z Li, L Chen - Proceedings of the VLDB Endowment, 2020"],"snippet":"Page 1. KBPearl: A Knowledge Base Population System Supported by Joint Entity and Relation Linking Xueling Lin, Haoyang Li, Hao Xin, Zijian Li, Lei Chen Department of Computer Science and Engineering The Hong Kong …","url":["https://dl.acm.org/doi/pdf/10.14778/3384345.3384352"]}
|
| 506 |
{"year":"2020","title":"Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation","authors":["W Zhang, X Li, Y Yang, R Dong, G Luo - Future Internet, 2020"],"snippet":"Recently, the pretraining of models has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the …","url":["https://www.mdpi.com/1999-5903/12/12/215/pdf"]}
|
| 507 |
{"year":"2020","title":"Kernel compositional embedding and its application in linguistic structured data classification","authors":["H Ganji, MM Ebadzadeh, S Khadivi - Knowledge-Based Systems, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0950705120300460"]}
@@ -544,7 +544,7 @@
| 544 |
{"year":"2020","title":"Leveraging Structured Metadata for Improving Question Answering on the Web","authors":["X Du, A Hassan, A Fourney, R Sim, P Bennett… - … of the 1st Conference of the …, 2020"],"snippet":"… website content. The Web Data Commons project (Mühleisen and Bizer, 2012) estimates that 0.9 billion HTML pages out of the 2.5 billion pages (37.1%) in the Common Crawl web corpus1 contain structured metadata. Figure …","url":["https://www.aclweb.org/anthology/2020.aacl-main.55.pdf"]}
|
| 545 |
{"year":"2020","title":"LIG-Health at Adhoc and Spoken IR Consumer Health Search: expanding queries using UMLS and FastText.","authors":["P Mulhem, GG Saez, A Mannion, D Schwab, J Frej - Conference and Labs of the …, 2020"],"snippet":"… The FastText embedding vector of a word is the sum of the vectors of its component ngrams. We used the pre-trained word vectors for English language, trained on Common Crawl and Wikipedia using FastText. The features of the model used are as follows; …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_129.pdf"]}
|
| 546 |
{"year":"2020","title":"LIMSI@ WMT 2020","authors":["SA Rauf, JC Rosales, I Paris, PM Quang, S Paris…"],"snippet":"… Domain Corpus sents. words words (en) (de) web Paracrawl 50,875 978 919 economy Tilde EESC 2,858 61 58 news Commoncrawl 2,399 51 47 Tilde rapid 940 20 19 News commentary 361 8 8 tourism Tilde tourism 7 …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.86.pdf"]}
|
| 547 |
-
{"year":"2020","title":"Linguistic Structure Guided Context Modeling for Referring Image Segmentation","authors":["F Zhang, J Han","T Hui, S Liu, S Huang, G Li, S Yu, F Zhang, J Han"],"snippet":"… rate. CNN is fixed during training. We use batch size 1 and stop training after 700K iterations. GloVe word embeddings [30] pretrained on Common Crawl with 840B tokens are used to replace randomly initialized ones. For
|
| 548 |
{"year":"2020","title":"Linguistically-aware Attention for Reducing the Semantic-Gap in Vision-Language Tasks","authors":["G KV, A Nambiar, KS Srinivas, A Mittal - arXiv preprint arXiv:2008.08012, 2020"],"snippet":"… The pre-trained word-to-vector networks such as Glove [29] and Bert [30] are inexpensive and rich in making linguistic correlations (since they are already trained on a large textual corpus such as Common Crawl and Wikipedia2014) …","url":["https://arxiv.org/pdf/2008.08012"]}
|
| 549 |
{"year":"2020","title":"LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon Induction Through Non-Linear Mapping in Latent Space","authors":["T Mohiuddin, MS Bari, S Joty - arXiv preprint arXiv:2004.13889, 2020"],"snippet":"… English, Italian, and German em- beddings were trained on WacKy crawling corpora using CBOW (Mikolov et al., 2013b), while Spanish and Finnish embeddings were trained on WMT News Crawl and Common Crawl, respectively. 4.2 Baseline Methods …","url":["https://arxiv.org/pdf/2004.13889"]}
|
| 550 |
{"year":"2020","title":"Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation","authors":["M Moradshahi, G Campagna, SJ Semnani, S Xu… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation Mehrad Moradshahi Giovanni Campagna Sina J. Semnani Silei Xu Monica S. Lam Computer Science Department Stanford University …","url":["https://arxiv.org/pdf/2010.05106"]}
@@ -658,7 +658,7 @@
| 658 |
{"year":"2020","title":"On the Language Neutrality of Pre-trained Multilingual Representations","authors":["J Libovický, R Rosa, A Fraser - arXiv preprint arXiv:2004.05160, 2020"],"snippet":"… XLM-RoBERTa. Conneau et al. (2019) claim that the original mBERT is under-trained and train a similar model on a larger dataset that consists of two terabytes of plain text extracted from CommonCrawl (Wenzek et al., 2019) …","url":["https://arxiv.org/pdf/2004.05160"]}
|
| 659 |
{"year":"2020","title":"On the Persistence of Persistent Identifiers of the Scholarly Web","authors":["M Klein, L Balakireva - arXiv preprint arXiv:2004.03011, 2020"],"snippet":"… These findings were confirmed in a large scale study by Thompson and Jian [16] based on two samples of the web taken from Common Crawl6 datasets … Thompson, HS, Tong, J.: Can common crawl reliably track persistent identifier (PID) use over time …","url":["https://arxiv.org/pdf/2004.03011"]}
|
| 660 |
{"year":"2020","title":"On the synthesis of metadata tags for HTML files","authors":["P Jiménez, JC Roldán, FO Gallego, R Corchuelo - Software: Practice and Experience"],"snippet":"… Recently, an analysis of the 32.04 million domains in the November 2019 Common Crawl has revealed that only 11.92 million domains provide metadata tags,1 which clearly argues for a method that helps software agents deal with the documents provided by the remaining …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2886"]}
|
| 661 |
-
{"year":"2020","title":"On using Product-Specific Schema. org from Web Data Commons: An Empirical Set of Best Practices","authors":["R Kiran Selvam, M Kejriwal - arXiv e-prints, 2020","RK Selvam, M Kejriwal - arXiv preprint arXiv:2007.13829, 2020"],"snippet":"… on e-commerce websites. The Web Data Commons (WDC) project has extracted schema.org data at scale from webpages in the Common Crawl and made it available as an RDF
|
| 662 |
{"year":"2020","title":"On-The-Fly Information Retrieval Augmentation for Language Models","authors":["H Wang, D McAllester - Proceedings of the First Joint Workshop on Narrative …, 2020"],"snippet":"… News etc. For language modelling we use the NY Times portion because it is written by native English speakers. Since GPT 2.0 is trained on Common Crawl which contains news collections started from 2008. To avoid testing …","url":["https://www.aclweb.org/anthology/2020.nuse-1.14.pdf"]}
|
| 663 |
{"year":"2020","title":"One Belt, One Road, One Sentiment? A Hybrid Approach to Gauging Public Opinions on the New Silk Road Initiative","authors":["JK Chandra, E Cambria, A Nanetti"],"snippet":"… ABSA. We used the Common Crawl GloVe version [44], a pre-trained 300-dimension vector representation database of 840 billion tokens and 2.2 million vocabulary, to convert our preprocessed tweets into word embeddings …","url":["https://sentic.net/one-belt-one-road-one-sentiment.pdf"]}
|
| 664 |
{"year":"2020","title":"Open Information Extraction as Additional Source for Kazakh Ontology Generation","authors":["N Khairova, S Petrasova, O Mamyrbayev, K Mukhsina - Asian Conference on …, 2020"],"snippet":"… also for many others. For example, an experiment was conducted in [19] for assessing the adequacy of measuring the factual density of 50 randomly selected Spanish documents in the CommonCrawl corpus. In a recent study …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_8"]}
@@ -718,7 +718,7 @@
| 718 |
{"year":"2020","title":"Question Answering When Knowledge Bases are Incomplete","authors":["C Pradel, D Sileo, Á Rodrigo, A Peñas, E Agirre - International Conference of the …, 2020","E Agirre - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… with bag of word embeddings. We use FastText CommonCrawl word embeddings [10] 4 and a max pooling to produce the continuous bag of word representations of table columns and the question text. The column bag of words …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA43&dq=commoncrawl&ots=BCbV87DfTS&sig=kVIo_AYLn9xgMPpxB-rDuk1jzEg","https://link.springer.com/chapter/10.1007/978-3-030-58219-7_4"]}
|
| 719 |
{"year":"2020","title":"Question Type Classification Methods Comparison","authors":["T Seidakhmetov - arXiv preprint arXiv:2001.00571, 2020"],"snippet":"… The GLoVe vectors were pre-trained using 840 billion tokens from Common Crawl, and each token is mapped into a 300-dimensional vector [3]. Xembeddings = GloveEmbedding( Xword) ∈ RNxDword where Dword is a number of dimensions of a word vector …","url":["https://arxiv.org/pdf/2001.00571"]}
|
| 720 |
{"year":"2020","title":"Questioning the Use of Bilingual Lexicon Induction as an Evaluation Task for Bilingual Word Embeddings","authors":["B Marie, A Fujita"],"snippet":"… gual word embeddings. In fact, this corpus was significantly smaller than the Wikipedia corpora for all the other languages, and than the Finnish Common Crawl corpus used to train Finnish Vecmap-emb. Another finding is …","url":["https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P5-14.pdf"]}
|
| 721 |
-
{"year":"2020","title":"
|
| 722 |
{"year":"2020","title":"Recent Trends in the Use of Deep Learning Models for Grammar Error Handling","authors":["M Naghshnejad, T Joshi, VN Nair - arXiv preprint arXiv:2009.02358, 2020"],"snippet":"Page 1. 1 Recent Trends in the Use of Deep Learning Models for Grammar Error Handling Mina Naghshnejad1, Tarun Joshi, and Vijayan N. Nair Corporate Model Risk, Wells Fargo2 Abstract Grammar error handling (GEH) is …","url":["https://arxiv.org/pdf/2009.02358"]}
|
| 723 |
{"year":"2020","title":"Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation","authors":["AC Stickland, X Li, M Ghazvininejad - arXiv preprint arXiv:2004.14911, 2020"],"snippet":"Page 1. Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation Asa Cooper Stickland♣ Xian Li♠ ♣ University of Edinburgh, ♠ Facebook AI [email protected], {xianl,ghazvini}@fb.com Marjan Ghazvininejad♠ Abstract …","url":["https://arxiv.org/pdf/2004.14911"]}
|
| 724 |
{"year":"2020","title":"ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning","authors":["W Yu, Z Jiang, Y Dong, J Feng - arXiv preprint arXiv:2002.04326, 2020"],"snippet":"Page 1. Published as a conference paper at ICLR 2020 RECLOR: AREADING COMPREHENSION DATASET REQUIRING LOGICAL REASONING Weihao Yu∗, Zihang Jiang∗, Yanfei Dong & Jiashi Feng National University …","url":["https://arxiv.org/pdf/2002.04326"]}
@@ -800,7 +800,7 @@
| 800 |
{"year":"2020","title":"Sociolinguistic Properties of Word Embeddings","authors":["A Arseniev-Koehler, JG Foster - SocArXiv. August, 2020"],"snippet":"… These studies use large, commonly available pre-trained embeddings or their training corpora, such as Google News, web data (Common Crawl), and Google Books … They replicated results using a pretrained model on Common Crawl data …","url":["https://osf.io/b8kud/download"]}
|
| 801 |
{"year":"2020","title":"Software for creating and analyzing semantic representations","authors":["FÅ Nielsen, LK Hansen - Statistical Semantics, 2020"],"snippet":"… This package provides models for the tagger, parser, named-entity recognizer and distributional semantic vectors trained on OntoNotes Release 5 and the Common Crawl dataset … 10 K–50 K. 300. 29 languages. GloVe. Common …","url":["https://link.springer.com/chapter/10.1007/978-3-030-37250-7_3"]}
|
| 802 |
{"year":"2020","title":"Spoken words as biomarkers: using machine learning to gain insight into communication as a predictor of anxiety","authors":["G Demiris, KL Corey Magan, D Parker Oliver… - Journal of the American …, 2020"],"snippet":"… The validity of using cosine distance in an embedding space to measure text similarity depends largely on how well the embedding space represents the semantic concepts present in the text. In our case, the word embeddings …","url":["https://academic.oup.com/jamia/advance-article-abstract/doi/10.1093/jamia/ocaa049/5831105"]}
|
| 803 |
-
{"year":"2020","title":"
|
| 804 |
{"year":"2020","title":"Stanza: A Python Natural Language Processing Toolkit for Many Human Languages","authors":["P Qi, Y Zhang, Y Zhang, J Bolton, CD Manning - arXiv preprint arXiv:2003.07082, 2020"],"snippet":"… For the character-level language models in the NER component, we pretrained them on a mix of the Common Crawl and Wikipedia dumps, and the news corpora released by the WMT19 Shared Task (Barrault et al., 2019), with …","url":["https://arxiv.org/pdf/2003.07082"]}
|
| 805 |
{"year":"2020","title":"STIL--Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++","authors":["JGM FitzGerald - arXiv preprint arXiv:2010.00760, 2020"],"snippet":"… The mBART.cc25 model was trained on 25 languages for 500k steps using a 1.4 TB corpus of scraped website data taken from Common Crawl (Wenzek et al., 2019). The model was trained to reconstruct masked tokens and to rearrange scrambled sentences …","url":["https://arxiv.org/pdf/2010.00760"]}
|
| 806 |
{"year":"2020","title":"STILTool: A Semantic Table Interpretation evaLuation Tool","authors":["E Jimenez-Ruiz, A Maurino - The Semantic Web: ESWC 2020 Satellite Events …","M Cremaschi, A Siano, R Avogadro, E Jimenez-Ruiz…"],"snippet":"… In order to size the spread of tabular data, 2.5 M tables have been identified within the Common Crawl repository1 [3]. The current snapshot of Wikipedia contains more than 3.23 M tables from more than 520k Wikipedia articles …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=C0UIEAAAQBAJ&oi=fnd&pg=PA61&dq=commoncrawl&ots=OcUKD8orbe&sig=5EUZjTQOLRGuwqaWXWRmrck1S50","https://preprints.2020.eswc-conferences.org/posters_demos/paper_293.pdf"]}
@@ -808,7 +808,7 @@
| 808 |
{"year":"2020","title":"Study and Creation of Datasets for Comparative Questions Classification","authors":["S Stahlhacke"],"snippet":"… The data used by the system is a preprocessed version of the Common Crawl Text Corpus8, which crawled from the world wide web … Which one is better suited for me, Xbox One or PS4? 8https://commoncrawl.org/ 4 Page 11. CHAPTER 1. INTRODUCTION …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2020-ma-stahlhacke.pdf"]}
|
| 809 |
{"year":"2020","title":"Studying the Evolution of Greek Words via Word Embeddings","authors":["V Barzokas, E Papagiannopoulou, G Tsoumakas - 11th Hellenic Conference on …, 2020"],"snippet":"… Despite the limited size of the Greek corpus compared to Common Crawl and Wikipedia used for the pre-trained fastText embeddings, we didn't detect any notable difference in the quality of our models in comparison with the pre-trained one …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411425"]}
|
| 810 |
{"year":"2020","title":"Substance over Style: Document-Level Targeted Content Transfer","authors":["A Hegel, S Rao, A Celikyilmaz, B Dolan - arXiv preprint arXiv:2010.08618, 2020"],"snippet":"Page 1. Substance over Style: Document-Level Targeted Content Transfer Allison Hegel1∗ Sudha Rao2 Asli Celikyilmaz2 Bill Dolan2 1Lexion, Seattle, WA, USA 2Microsoft Research, Redmond, WA, USA [email protected] {sudhra,aslicel,billdol}@microsoft.com Abstract …","url":["https://arxiv.org/pdf/2010.08618"]}
|
| 811 |
-
{"year":"2020","title":"Subword Segmentation and a Single Bridge Language Affect Zero-Shot Neural Machine Translation","authors":["A Rios, M Müller, R Sennrich - arXiv preprint arXiv:2011.01703, 2020","AR Gonzales, M Müller, R Sennrich - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… Page 3.
|
| 812 |
{"year":"2020","title":"Suggesting Citations for Wikidata Claims based on Wikipedia's External References","authors":["P Curotto, A Hogan"],"snippet":"… Offline: Given that some Wikidata items do not have an associated Wikipedia article, that many Wikipedia articles have few references, etc., it would be interesting to develop a broader corpus with more documents from the Web, perhaps from the Common Crawl …","url":["http://aidanhogan.com/docs/wikidata-references.pdf"]}
|
| 813 |
{"year":"2020","title":"Supervised Understanding of Word Embeddings","authors":["HZ Yerebakan, P Bhatia, Y Shinagawa"],"snippet":"… In our experiments, we have used scikit-learn linear logistic regression model with a positive class weight of 2 to enhance the effect of positive words. We have used top 250k words of Fasttext Common Crawl word …","url":["https://rcqa-ws.github.io/papers/paper8.pdf"]}
|
| 814 |
{"year":"2020","title":"Surface pattern-enhanced relation extraction with global constraints","authors":["H Jiang, JT Liu, S Zhang, D Yang, Y Xiao, W Wang - Knowledge and Information …, 2020"],"snippet":"Relation extraction is one of the most important tasks in information extraction. The traditional works either use sentences or surface patterns (ie, the.","url":["https://link.springer.com/article/10.1007/s10115-020-01502-y"]}
@@ -835,7 +835,7 @@
| 835 |
{"year":"2020","title":"Text-based classification of interviews for mental health--juxtaposing the state of the art","authors":["JV Wouts - arXiv preprint arXiv:2008.01543, 2020"],"snippet":"… Model name Pretrain corpus Tokenizer type Acc Sentiment analysis belabBERT Common Crawl Dutch (non-shuffled) BytePairEncoding 95.92∗ % RobBERT Common Crawl Dutch (shuffled) BytePairEncoding 94.42 …","url":["https://arxiv.org/pdf/2008.01543"]}
|
| 836 |
{"year":"2020","title":"TextSETTR: Label-Free Text Style Extraction and Tunable Targeted Restyling","authors":["P Riley, N Constant, M Guo, G Kumar, D Uthus… - arXiv preprint arXiv …, 2020"],"snippet":"… Furthermore, we demonstrate that a single model trained on unlabeled Common Crawl data is capable of transferring along multiple dimensions including dialect, emotiveness, formality, politeness, and sentiment. 1 INTRODUCTION …","url":["https://arxiv.org/pdf/2010.03802"]}
|
| 837 |
{"year":"2020","title":"TF-CR: Weighting Embeddings for Text Classification","authors":["A Zubiaga - arXiv preprint arXiv:2012.06606, 2020"],"snippet":"… Page 6. • cglove: GloVe embeddings trained from Common Crawl. • wglove: GloVe embeddings trained from Wikipedia.6 We use two different classifiers for these experiments, SVM and Logistic Regression, which are known …","url":["https://arxiv.org/pdf/2012.06606"]}
|
| 838 |
-
{"year":"2020","title":"The 2019 BBN Cross-lingual Information Retrieval System","authors":["DK Le Zhang, W Hartmann, M Srivastava, L Tarlin… - LREC 2020 Language Resources …","L Zhang, D Karakos, W Hartmann, M Srivastava… - Proceedings of the …, 2020"],"snippet":"…
|
| 839 |
{"year":"2020","title":"The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020)","authors":["TC Ferreira, C Gardent, C van der Lee, N Ilinykh… - Proceedings of the 3rd …, 2020"],"snippet":"… 3.3 Mono-task, Bilingual Approaches cuni-ufal. The mBART model (Liu et al., 2020) is pre-trained for multilingual denoising on the large-scale multilingual CC25 corpus extracted from Common Crawl, which contains …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf"]}
|
| 840 |
{"year":"2020","title":"THE ABILITY OF WORD EMBEDDINGS TO CAPTURE WORD SIMILARITIES","authors":["M Toshevska, F Stojanovska, J Kalajdjieski"],"snippet":"… architectures [25]. In our experiments, we have used pre-trained models both trained with subword information on Wikipedia 2017 (16B tokens) and trained with subword information on Common Crawl (600B tokens)4. 2https …","url":["http://www.academia.edu/download/63915170/120200714-10552-nn915u.pdf"]}
|
| 841 |
{"year":"2020","title":"The ADAPT Centre's neural MT systems for the WAT 2020 document-level translation task","authors":["W Jooste, R Haque, A Way - 2020"],"snippet":"… Finally, source-language monolingual data with n-grams similar to that of the documents in the test set was mined from the Common Crawl Corpus6 to be used as a source-side original synthetic corpus (SOSC) for fine-tuning the NMT model parameters …","url":["http://doras.dcu.ie/25205/1/WAT_2020.pdf"]}
| 23 |
{"year":"2020","title":"A Large-Scale Multi-Document Summarization Dataset from the Wikipedia Current Events Portal","authors":["D Gholipour Ghalandari, C Hokamp, J Glover, G Ifrim - arXiv, 2020","DG Ghalandari, C Hokamp, NT Pham, J Glover, G Ifrim - arXiv preprint arXiv …, 2020"],"snippet":"… We also automatically extend these source articles by looking for related articles in the Common Crawl archive … Table 1: Example event summary and linked source ar- ticles from the Wikipedia Current Events Portal, and …","url":["https://arxiv.org/pdf/2005.10070","https://ui.adsabs.harvard.edu/abs/2020arXiv200510070G/abstract"]}
|
| 24 |
{"year":"2020","title":"A Large-Scale Semi-Supervised Dataset for Offensive Language Identification","authors":["S Rosenthal, P Atanasova, G Karadzhov, M Zampieri… - arXiv preprint arXiv …, 2020"],"snippet":"… The first layer of the LSTM model is an embedding layer, which we initialize with a concatenation of the GloVe 300-dimensional (Pennington et al., 2014) and FastText's Common Crawl 300dimensional embeddings (Grave et al., 2018). The Page 5 …","url":["https://arxiv.org/pdf/2004.14454"]}
|
| 25 |
{"year":"2020","title":"A Longitudinal Analysis of Job Skills for Entry-Level Data Analysts","authors":["T Dong, J Triche - Journal of Information Systems Education, 2020"],"snippet":"… Therefore, we used the Common Crawl dataset to address this problem (http:// commoncrawl.org/). Common Crawl is a non-profit organization that builds and maintains an open repository of web crawl data that is, in essence, a copy of the Internet …","url":["http://jise.org/Volume31/n4/JISEv31n4p312.pdf"]}
|
| 26 |
+
{"year":"2020","title":"A Monolingual Approach to Contextualized Word Embeddings for Mid-Resource Languages","authors":["P Ortiz Suárez, L Romary, B Sagot - arXiv, 2020","PO Suárez, L Romary, B Sagot - arXiv preprint arXiv:2006.06202, 2020"],"snippet":"… Abstract. We use the multilingual OSCAR corpus, extracted from Common Crawl via language classification, filtering and cleaning, to train monolingual contextualized word embeddings (ELMo) for several mid-resource languages …","url":["https://arxiv.org/pdf/2006.06202","https://ui.adsabs.harvard.edu/abs/2020arXiv200606202O/abstract"]}
|
| 27 |
{"year":"2020","title":"A Multilingual Evaluation for Online Hate Speech Detection","authors":["M Corazza, S Menini, E Cabrio, S Tonelli, S Villata - ACM Transactions on Internet …, 2020"],"snippet":"… In particular, we use the Italian and German embeddings trained on Common Crawl and Wikipedia [33] with size 300 … English Fasttext Crawl embeddings: English embeddings trained by Fasttext9 on Common Crawl with an embedding size of 300 …","url":["https://dl.acm.org/doi/abs/10.1145/3377323"]}
|
| 28 |
{"year":"2020","title":"A Neural-based model to Predict the Future Natural Gas Market Price through Open-domain Event Extraction","authors":["MT Chau, D Esteves, J Lehmann"],"snippet":"… Strong baseline We feed the price and sentence embedding of filtered news using spaCy small English (Context tensor trained on [39], 300-d embedding vector) and large English model (trained on both [39] and Common Crawl …","url":["http://ceur-ws.org/Vol-2611/paper2.pdf"]}
|
| 29 |
{"year":"2020","title":"A NOVEL APPROACH FOR NAMED ENTITY RECOGNITION ON HINDI LANGUAGE USING RESIDUAL BILSTM NETWORK","authors":["R Shelke, D Thakore"],"snippet":"… It provides word embeddings for Hindi (and 157 other languages) and is based on the CBOW (Continuous Bag-of-Words) model. The CBOW model learns by predicting the current word based on its context, and it was trained …","url":["http://www.academia.edu/download/63216061/120200506-26612-102sbv8.pdf"]}
| 398 |
{"year":"2020","title":"Gender Detection on Social Networks using Ensemble Deep Learning","authors":["K Kowsari, M Heidarysafa, T Odukoya, P Potter… - arXiv preprint arXiv …, 2020"],"snippet":"… 25d, 50d, 100d, and 200d vectors. This word embedding is trained over even bigger corpora, including Wikipedia and Common Crawl content. The objective function is as follows: f(wi − wj, ˜wk) = Pik Pjk (2) where wi is refer to …","url":["https://arxiv.org/pdf/2004.06518"]}
|
| 399 |
{"year":"2020","title":"Gender stereotype reinforcement: Measuring the gender bias conveyed by ranking algorithms","authors":["A Fabris, A Purpura, G Silvello, GA Susto - Information Processing & Management, 2020"],"snippet":"… Corrado, Dean, 2013). Most frequently, they are learnt from large text corpora available online (such as Wikipedia, Google News and Common Crawl, capturing semantic relationships of words based on their usage. Recent work …","url":["https://arxiv.org/pdf/2009.01334"]}
|
| 400 |
{"year":"2020","title":"Gender stereotypes are reflected in the distributional structure of 25 languages","authors":["M Lewis, G Lupyan - Nature Human Behaviour, 2020"],"snippet":"Cultural stereotypes such as the idea that men are more suited for paid work and women are more suited for taking care of the home and family, may contribute to gender imbalances in science, technology, engineering and …","url":["https://www.nature.com/articles/s41562-020-0918-6"]}
|
| 401 |
+
{"year":"2020","title":"Generalisation of Cyberbullying Detection","authors":["K Richard, L Marc-André - arXiv preprint arXiv:2009.01046, 2020","MA Larochelle, R Khoury"],"snippet":"… messages are truncated). We use a FastText network pre-trained on Common Crawl data featuring 300 dimensions and 2 million word vectors with subword information 8 to convert the words into vector representations. These vectors …","url":["https://arxiv.org/pdf/2009.01046","https://web.ntpu.edu.tw/~myday/doc/ASONAM2020/ASONAM2020_Proceedings/pdf/papers/047_034_296.pdf"]}
|
| 402 |
{"year":"2020","title":"Generalize Sentence Representation with Self-Inference","authors":["KC Yang, HY Kao"],"snippet":"… Our model is trained with the phrases in the parse trees and tested on the whole sentence. Experimental Settings We initialize word embeddings using the pretrained FastText common-crawl vectors (Mikolov et al. 2018) and freeze the weights during training …","url":["https://www.aaai.org/Papers/AAAI/2020GB/AAAI-YangKC.7098.pdf"]}
|
| 403 |
{"year":"2020","title":"Generating Categories for Sets of Entities","authors":["S Zhang, K Balog, J Callan - arXiv preprint arXiv:2008.08428, 2020"],"snippet":"… entity linking for tables and table schema to predicate matching. Ritze et al. [31] propose an iterative method for matching tables to DBpedia. They develop a manually annotated dataset for matching between a Web table corpus …","url":["https://arxiv.org/pdf/2008.08428"]}
|
| 404 |
{"year":"2020","title":"Generating Diverse Conversation Responses by Creating and Ranking Multiple Candidates","authors":["YP Ruan, ZH Ling, X Zhu, Q Liu, JC Gu - Computer Speech & Language, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0885230820300048"]}
{"year":"2020","title":"Introduction to Cloud Computing and Amazon Web Services (AWS)","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… 5 examples. IAM and S3 sections are necessary for Chapters 6 and 7 since we will be using data compiled by a nonprofit called common crawl which is only publicly available on S3 through AWS open registry. You will have …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_3"]}
{"year":"2020","title":"Introduction to Common Crawl Datasets","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"The Common Crawl Foundation (https://commoncrawl.org/) is a 501(c)(3) nonprofit involved in providing open access web crawl data going back to over eight years. They perform monthly web crawls which cover over 25 billion pages for each month. This …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_6"]}
{"year":"2020","title":"Introduction to Web Scraping","authors":["JM Patel - Getting Structured Data from the Internet, 2020"],"snippet":"… We will introduce natural language processing algorithms in Chapter 4, and we will put them into action in Chapters 6 and 7 on a Common Crawl dataset. The next step is loading the cleaned data from the preceding step into an appropriate database …","url":["https://link.springer.com/chapter/10.1007/978-1-4842-6576-5_1"]}
+
{"year":"2020","title":"Is Everything Fine, Grandma? Acoustic and Linguistic Modeling for Robust Elderly Speech Emotion Recognition","authors":["G Sogancıoglu, O Verkholyak, H Kaya, D Fedotov… - INTERSPEECH, Shanghai …, 2020","G Soğancıoğlu, O Verkholyak, H Kaya, D Fedotov… - arXiv preprint arXiv …, 2020"],"snippet":"… negative scores. For the SentiWordNet representation, an input text is tokenized ignoring the punctuation, and each token is looked up ac- 1https://cloud.google.com/ translate 2http://commoncrawl.org/ Page 3. cording to its POS. It …","url":["https://arxiv.org/pdf/2009.03432","https://indico2.conference4me.psnc.pl/event/35/contributions/3140/attachments/1218/1261/Wed-SS-1-4-12.pdf"]}
{"year":"2020","title":"Is language modeling enough? Evaluating effective embedding combinations","authors":["R Schneider, T Oberhauser, P Grundmann, FA Gers… - 2020"],"snippet":"… 2.1. Universal Text Embeddings Recently, researchers explore universal text embeddings trained on extensive Web corpora, such as the Common Crawl6 (Mikolov et al., 2018; Radford et al., 2019), the billion … 5https …","url":["https://eprints.soton.ac.uk/438613/1/LREC20_LM_TM_27_1_.pdf"]}
{"year":"2020","title":"Is MAP Decoding All You Need? The Inadequacy of the Mode in Neural Machine Translation","authors":["B Eikema, W Aziz - arXiv preprint arXiv:2005.10283, 2020"],"snippet":"… For English-Nepali we also use a translated version of the Penn Treebank4 and for English-Sinhala we additionally use Open Subtitles (Lison et al., 2018). We use a filtered crawl of Wikipedia and Common Crawl released in Guzmán et al …","url":["https://arxiv.org/pdf/2005.10283"]}
{"year":"2020","title":"Is Wikipedia succeeding in reducing gender bias? Assessing changes in gender bias in Wikipedia using word embeddings","authors":["KG Schmahl, TJ Viering, S Makrodimitris, AN Jahfari… - Proceedings of the Fourth …, 2020"],"snippet":"… These categories have shown significant bias towards male or female words in embeddings from Google News corpora [Mikolov et al., 2013a], Google Books [Jones et al., 2020], as well as a 'Common Crawl' corpus [Caliskan et al., 2017] …","url":["https://www.aclweb.org/anthology/2020.nlpcss-1.11.pdf"]}
{"year":"2020","title":"Italian Transformers Under the Linguistic Lens","authors":["A Miaschip, G Sartim, D Brunato, F Dell'Orletta… - Proceedings of the Seventh …, 2020"],"snippet":"… For instance, we can notice that, for both the probing models, features related to the distribution of syntactic relations (SyntacticDep) are better predicted by GePpeTto, while GilBERTo and UmBERTo-Commoncrawl are the best …","url":["http://ceur-ws.org/Vol-2769/paper_56.pdf"]}
{"year":"2020","title":"JASS: Japanese-specific Sequence to Sequence Pre-training for Neural Machine Translation","authors":["Z Mao, F Cromieres, R Dabre, H Song, S Kurohashi - arXiv preprint arXiv:2005.03361, 2020"],"snippet":"… Mono Ja Common Crawl 22M En News Crawl 22M Ru News Crawl 22M … 5.1.2. Monolingual data We use monolingual data containing 22M Japanese, 22M English and 22M Russian sentences randomly sub-sampled from Common Crawl dataset and News crawl4 dataset …","url":["https://arxiv.org/pdf/2005.03361"]}
{"year":"2020","title":"Joint Multiclass Debiasing of Word Embeddings","authors":["R Popović, F Lemmerich, M Strohmaier - arXiv preprint arXiv:2003.11520, 2020"],"snippet":"… As in previous studies [7], evaluation was done on three pretrained Word Embedding models with vector dimension of 300: FastText2(English we- bcrawl and Wikipedia, 2 million words), GloVe3(Common Crawl, Wikipedia …","url":["https://arxiv.org/pdf/2003.11520"]}
+
{"year":"2020","title":"Joint translation and unit conversion for end-to-end localization","authors":["G Dinu, P Mathur, M Federico, S Lauly, Y Al-Onaizan - arXiv preprint arXiv …, 2020","GDPMMFSL YaserAl-Onaizan, AWS Amazon"],"snippet":"… Europarl (Koehn, 2005) and news commentary data from WMT En→De shared task 2019 totalling 2.2 million sentences.2 Standard translation test sets do not have, however, enough examples of unit conversions and in fact …","url":["https://arxiv.org/pdf/2004.05219","https://assets.amazon.science/b2/a7/e1ada6104b3587401b30ccc8637a/joint-translation-and-unit-conversion-for-end-to-end-localization.pdf"]}
{"year":"2020","title":"KBPearl: a knowledge base population system supported by joint entity and relation linking","authors":["X Lin, H Li, H Xin, Z Li, L Chen - Proceedings of the VLDB Endowment, 2020"],"snippet":"Page 1. KBPearl: A Knowledge Base Population System Supported by Joint Entity and Relation Linking Xueling Lin, Haoyang Li, Hao Xin, Zijian Li, Lei Chen Department of Computer Science and Engineering The Hong Kong …","url":["https://dl.acm.org/doi/pdf/10.14778/3384345.3384352"]}
{"year":"2020","title":"Keeping Models Consistent between Pretraining and Translation for Low-Resource Neural Machine Translation","authors":["W Zhang, X Li, Y Yang, R Dong, G Luo - Future Internet, 2020"],"snippet":"Recently, the pretraining of models has been successfully applied to unsupervised and semi-supervised neural machine translation. A cross-lingual language model uses a pretrained masked language model to initialize the …","url":["https://www.mdpi.com/1999-5903/12/12/215/pdf"]}
{"year":"2020","title":"Kernel compositional embedding and its application in linguistic structured data classification","authors":["H Ganji, MM Ebadzadeh, S Khadivi - Knowledge-Based Systems, 2020"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0950705120300460"]}
{"year":"2020","title":"Leveraging Structured Metadata for Improving Question Answering on the Web","authors":["X Du, A Hassan, A Fourney, R Sim, P Bennett… - … of the 1st Conference of the …, 2020"],"snippet":"… website content. The Web Data Commons project (Mühleisen and Bizer, 2012) estimates that 0.9 billion HTML pages out of the 2.5 billion pages (37.1%) in the Common Crawl web corpus1 contain structured metadata. Figure …","url":["https://www.aclweb.org/anthology/2020.aacl-main.55.pdf"]}
{"year":"2020","title":"LIG-Health at Adhoc and Spoken IR Consumer Health Search: expanding queries using UMLS and FastText.","authors":["P Mulhem, GG Saez, A Mannion, D Schwab, J Frej - Conference and Labs of the …, 2020"],"snippet":"… The FastText embedding vector of a word is the sum of the vectors of its component ngrams. We used the pre-trained word vectors for English language, trained on Common Crawl and Wikipedia using FastText. The features of the model used are as follows; …","url":["http://www.dei.unipd.it/~ferro/CLEF-WN-Drafts/CLEF2020/paper_129.pdf"]}
{"year":"2020","title":"LIMSI@ WMT 2020","authors":["SA Rauf, JC Rosales, I Paris, PM Quang, S Paris…"],"snippet":"… Domain Corpus sents. words words (en) (de) web Paracrawl 50,875 978 919 economy Tilde EESC 2,858 61 58 news Commoncrawl 2,399 51 47 Tilde rapid 940 20 19 News commentary 361 8 8 tourism Tilde tourism 7 …","url":["http://statmt.org/wmt20/pdf/2020.wmt-1.86.pdf"]}
+
{"year":"2020","title":"Linguistic Structure Guided Context Modeling for Referring Image Segmentation","authors":["F Zhang, J Han","T Hui, S Liu, S Huang, G Li, S Yu, F Zhang, J Han"],"snippet":"… rate. CNN is fixed during training. We use batch size 1 and stop training after 700K iterations. GloVe word embeddings [30] pretrained on Common Crawl with 840B tokens are used to replace randomly initialized ones. For …","url":["http://colalab.org/media/paper/Linguistic_Structure_Guided_Context_Modeling_for_Referring_Image_Segmentation.pdf","https://link.springer.com/content/pdf/10.1007/978-3-030-58607-2_4.pdf"]}
{"year":"2020","title":"Linguistically-aware Attention for Reducing the Semantic-Gap in Vision-Language Tasks","authors":["G KV, A Nambiar, KS Srinivas, A Mittal - arXiv preprint arXiv:2008.08012, 2020"],"snippet":"… The pre-trained word-to-vector networks such as Glove [29] and Bert [30] are inexpensive and rich in making linguistic correlations (since they are already trained on a large textual corpus such as Common Crawl and Wikipedia2014) …","url":["https://arxiv.org/pdf/2008.08012"]}
{"year":"2020","title":"LNMap: Departures from Isomorphic Assumption in Bilingual Lexicon Induction Through Non-Linear Mapping in Latent Space","authors":["T Mohiuddin, MS Bari, S Joty - arXiv preprint arXiv:2004.13889, 2020"],"snippet":"… English, Italian, and German em- beddings were trained on WacKy crawling corpora using CBOW (Mikolov et al., 2013b), while Spanish and Finnish embeddings were trained on WMT News Crawl and Common Crawl, respectively. 4.2 Baseline Methods …","url":["https://arxiv.org/pdf/2004.13889"]}
{"year":"2020","title":"Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation","authors":["M Moradshahi, G Campagna, SJ Semnani, S Xu… - arXiv preprint arXiv …, 2020"],"snippet":"Page 1. Localizing Open-Ontology QA Semantic Parsers in a Day Using Machine Translation Mehrad Moradshahi Giovanni Campagna Sina J. Semnani Silei Xu Monica S. Lam Computer Science Department Stanford University …","url":["https://arxiv.org/pdf/2010.05106"]}
{"year":"2020","title":"On the Language Neutrality of Pre-trained Multilingual Representations","authors":["J Libovický, R Rosa, A Fraser - arXiv preprint arXiv:2004.05160, 2020"],"snippet":"… XLM-RoBERTa. Conneau et al. (2019) claim that the original mBERT is under-trained and train a similar model on a larger dataset that consists of two terabytes of plain text extracted from CommonCrawl (Wenzek et al., 2019) …","url":["https://arxiv.org/pdf/2004.05160"]}
{"year":"2020","title":"On the Persistence of Persistent Identifiers of the Scholarly Web","authors":["M Klein, L Balakireva - arXiv preprint arXiv:2004.03011, 2020"],"snippet":"… These findings were confirmed in a large scale study by Thompson and Jian [16] based on two samples of the web taken from Common Crawl6 datasets … Thompson, HS, Tong, J.: Can common crawl reliably track persistent identifier (PID) use over time …","url":["https://arxiv.org/pdf/2004.03011"]}
{"year":"2020","title":"On the synthesis of metadata tags for HTML files","authors":["P Jiménez, JC Roldán, FO Gallego, R Corchuelo - Software: Practice and Experience"],"snippet":"… Recently, an analysis of the 32.04 million domains in the November 2019 Common Crawl has revealed that only 11.92 million domains provide metadata tags,1 which clearly argues for a method that helps software agents deal with the documents provided by the remaining …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/spe.2886"]}
+
{"year":"2020","title":"On using Product-Specific Schema. org from Web Data Commons: An Empirical Set of Best Practices","authors":["R Kiran Selvam, M Kejriwal - arXiv e-prints, 2020","RK Selvam, M Kejriwal - arXiv preprint arXiv:2007.13829, 2020"],"snippet":"… on e-commerce websites. The Web Data Commons (WDC) project has extracted schema.org data at scale from webpages in the Common Crawl and made it available as an RDF `knowledge graph' at scale. The portion of this …","url":["https://arxiv.org/pdf/2007.13829","https://ui.adsabs.harvard.edu/abs/2020arXiv200713829K/abstract"]}
{"year":"2020","title":"On-The-Fly Information Retrieval Augmentation for Language Models","authors":["H Wang, D McAllester - Proceedings of the First Joint Workshop on Narrative …, 2020"],"snippet":"… News etc. For language modelling we use the NY Times portion because it is written by native English speakers. Since GPT 2.0 is trained on Common Crawl which contains news collections started from 2008. To avoid testing …","url":["https://www.aclweb.org/anthology/2020.nuse-1.14.pdf"]}
{"year":"2020","title":"One Belt, One Road, One Sentiment? A Hybrid Approach to Gauging Public Opinions on the New Silk Road Initiative","authors":["JK Chandra, E Cambria, A Nanetti"],"snippet":"… ABSA. We used the Common Crawl GloVe version [44], a pre-trained 300-dimension vector representation database of 840 billion tokens and 2.2 million vocabulary, to convert our preprocessed tweets into word embeddings …","url":["https://sentic.net/one-belt-one-road-one-sentiment.pdf"]}
{"year":"2020","title":"Open Information Extraction as Additional Source for Kazakh Ontology Generation","authors":["N Khairova, S Petrasova, O Mamyrbayev, K Mukhsina - Asian Conference on …, 2020"],"snippet":"… also for many others. For example, an experiment was conducted in [19] for assessing the adequacy of measuring the factual density of 50 randomly selected Spanish documents in the CommonCrawl corpus. In a recent study …","url":["https://link.springer.com/chapter/10.1007/978-3-030-41964-6_8"]}
{"year":"2020","title":"Question Answering When Knowledge Bases are Incomplete","authors":["C Pradel, D Sileo, Á Rodrigo, A Peñas, E Agirre - International Conference of the …, 2020","E Agirre - … IR Meets Multilinguality, Multimodality, and Interaction …"],"snippet":"… with bag of word embeddings. We use FastText CommonCrawl word embeddings [10] 4 and a max pooling to produce the continuous bag of word representations of table columns and the question text. The column bag of words …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=IxP9DwAAQBAJ&oi=fnd&pg=PA43&dq=commoncrawl&ots=BCbV87DfTS&sig=kVIo_AYLn9xgMPpxB-rDuk1jzEg","https://link.springer.com/chapter/10.1007/978-3-030-58219-7_4"]}
{"year":"2020","title":"Question Type Classification Methods Comparison","authors":["T Seidakhmetov - arXiv preprint arXiv:2001.00571, 2020"],"snippet":"… The GLoVe vectors were pre-trained using 840 billion tokens from Common Crawl, and each token is mapped into a 300-dimensional vector [3]. Xembeddings = GloveEmbedding( Xword) ∈ RNxDword where Dword is a number of dimensions of a word vector …","url":["https://arxiv.org/pdf/2001.00571"]}
{"year":"2020","title":"Questioning the Use of Bilingual Lexicon Induction as an Evaluation Task for Bilingual Word Embeddings","authors":["B Marie, A Fujita"],"snippet":"… gual word embeddings. In fact, this corpus was significantly smaller than the Wikipedia corpora for all the other languages, and than the Finnish Common Crawl corpus used to train Finnish Vecmap-emb. Another finding is …","url":["https://www.anlp.jp/proceedings/annual_meeting/2020/pdf_dir/P5-14.pdf"]}
+
{"year":"2020","title":"RealToxicityPrompts: Evaluating Neural Toxic Degeneration in Language Models","authors":["S Gehman, S Gururangan, M Sap, Y Choi, NA Smith - arXiv preprint arXiv …, 2020","SGS Gururangan, MSY Choi, NA Smith"],"snippet":"… GPT-2 (specifically, GPT-2-small; Radford et al., 2019), is a similarly sized model pretrained on OPENAI-WT, which contains 40GB of English web text and is described in §6.7 GPT-3 (Brown et al., 2020) is pretrained on a mix …","url":["https://arxiv.org/pdf/2009.11462","https://homes.cs.washington.edu/~msap/pdfs/gehman2020realtoxicityprompts.pdf"]}
{"year":"2020","title":"Recent Trends in the Use of Deep Learning Models for Grammar Error Handling","authors":["M Naghshnejad, T Joshi, VN Nair - arXiv preprint arXiv:2009.02358, 2020"],"snippet":"Page 1. 1 Recent Trends in the Use of Deep Learning Models for Grammar Error Handling Mina Naghshnejad1, Tarun Joshi, and Vijayan N. Nair Corporate Model Risk, Wells Fargo2 Abstract Grammar error handling (GEH) is …","url":["https://arxiv.org/pdf/2009.02358"]}
{"year":"2020","title":"Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation","authors":["AC Stickland, X Li, M Ghazvininejad - arXiv preprint arXiv:2004.14911, 2020"],"snippet":"Page 1. Recipes for Adapting Pre-trained Monolingual and Multilingual Models to Machine Translation Asa Cooper Stickland♣ Xian Li♠ ♣ University of Edinburgh, ♠ Facebook AI [email protected], {xianl,ghazvini}@fb.com Marjan Ghazvininejad♠ Abstract …","url":["https://arxiv.org/pdf/2004.14911"]}
{"year":"2020","title":"ReClor: A Reading Comprehension Dataset Requiring Logical Reasoning","authors":["W Yu, Z Jiang, Y Dong, J Feng - arXiv preprint arXiv:2002.04326, 2020"],"snippet":"Page 1. Published as a conference paper at ICLR 2020 RECLOR: AREADING COMPREHENSION DATASET REQUIRING LOGICAL REASONING Weihao Yu∗, Zihang Jiang∗, Yanfei Dong & Jiashi Feng National University …","url":["https://arxiv.org/pdf/2002.04326"]}
{"year":"2020","title":"Sociolinguistic Properties of Word Embeddings","authors":["A Arseniev-Koehler, JG Foster - SocArXiv. August, 2020"],"snippet":"… These studies use large, commonly available pre-trained embeddings or their training corpora, such as Google News, web data (Common Crawl), and Google Books … They replicated results using a pretrained model on Common Crawl data …","url":["https://osf.io/b8kud/download"]}
{"year":"2020","title":"Software for creating and analyzing semantic representations","authors":["FÅ Nielsen, LK Hansen - Statistical Semantics, 2020"],"snippet":"… This package provides models for the tagger, parser, named-entity recognizer and distributional semantic vectors trained on OntoNotes Release 5 and the Common Crawl dataset … 10 K–50 K. 300. 29 languages. GloVe. Common …","url":["https://link.springer.com/chapter/10.1007/978-3-030-37250-7_3"]}
{"year":"2020","title":"Spoken words as biomarkers: using machine learning to gain insight into communication as a predictor of anxiety","authors":["G Demiris, KL Corey Magan, D Parker Oliver… - Journal of the American …, 2020"],"snippet":"… The validity of using cosine distance in an embedding space to measure text similarity depends largely on how well the embedding space represents the semantic concepts present in the text. In our case, the word embeddings …","url":["https://academic.oup.com/jamia/advance-article-abstract/doi/10.1093/jamia/ocaa049/5831105"]}
+
{"year":"2020","title":"Spontaneous Stereotype Content: Measurement Aiming Toward Theoretical Integration and Discovery","authors":["G Nicolas Ferreira - 2020","GN Ferreira - 2020"],"snippet":"Page 1. SPONTANEOUS STEREOTYPE CONTENT: MEASUREMENT AIMING TOWARD THEORETICAL INTEGRATION AND DISCOVERY GANDALF NICOLAS FERREIRA A DISSERTATION PRESENTED TO THE FACULTY OF PRINCETON UNIVERSITY IN …","url":["http://search.proquest.com/openview/41d33da8e87d459690442733f719668f/1?pq-origsite=gscholar&cbl=18750&diss=y","https://dataspace.princeton.edu/bitstream/88435/dsp01zp38wg55d/1/NicolasFerreira_princeton_0181D_13366.pdf"]}
{"year":"2020","title":"Stanza: A Python Natural Language Processing Toolkit for Many Human Languages","authors":["P Qi, Y Zhang, Y Zhang, J Bolton, CD Manning - arXiv preprint arXiv:2003.07082, 2020"],"snippet":"… For the character-level language models in the NER component, we pretrained them on a mix of the Common Crawl and Wikipedia dumps, and the news corpora released by the WMT19 Shared Task (Barrault et al., 2019), with …","url":["https://arxiv.org/pdf/2003.07082"]}
{"year":"2020","title":"STIL--Simultaneous Slot Filling, Translation, Intent Classification, and Language Identification: Initial Results using mBART on MultiATIS++","authors":["JGM FitzGerald - arXiv preprint arXiv:2010.00760, 2020"],"snippet":"… The mBART.cc25 model was trained on 25 languages for 500k steps using a 1.4 TB corpus of scraped website data taken from Common Crawl (Wenzek et al., 2019). The model was trained to reconstruct masked tokens and to rearrange scrambled sentences …","url":["https://arxiv.org/pdf/2010.00760"]}
{"year":"2020","title":"STILTool: A Semantic Table Interpretation evaLuation Tool","authors":["E Jimenez-Ruiz, A Maurino - The Semantic Web: ESWC 2020 Satellite Events …","M Cremaschi, A Siano, R Avogadro, E Jimenez-Ruiz…"],"snippet":"… In order to size the spread of tabular data, 2.5 M tables have been identified within the Common Crawl repository1 [3]. The current snapshot of Wikipedia contains more than 3.23 M tables from more than 520k Wikipedia articles …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=C0UIEAAAQBAJ&oi=fnd&pg=PA61&dq=commoncrawl&ots=OcUKD8orbe&sig=5EUZjTQOLRGuwqaWXWRmrck1S50","https://preprints.2020.eswc-conferences.org/posters_demos/paper_293.pdf"]}
{"year":"2020","title":"Study and Creation of Datasets for Comparative Questions Classification","authors":["S Stahlhacke"],"snippet":"… The data used by the system is a preprocessed version of the Common Crawl Text Corpus8, which crawled from the world wide web … Which one is better suited for me, Xbox One or PS4? 8https://commoncrawl.org/ 4 Page 11. CHAPTER 1. INTRODUCTION …","url":["https://www.inf.uni-hamburg.de/en/inst/ab/lt/teaching/theses/completed-theses/2020-ma-stahlhacke.pdf"]}
{"year":"2020","title":"Studying the Evolution of Greek Words via Word Embeddings","authors":["V Barzokas, E Papagiannopoulou, G Tsoumakas - 11th Hellenic Conference on …, 2020"],"snippet":"… Despite the limited size of the Greek corpus compared to Common Crawl and Wikipedia used for the pre-trained fastText embeddings, we didn't detect any notable difference in the quality of our models in comparison with the pre-trained one …","url":["https://dl.acm.org/doi/abs/10.1145/3411408.3411425"]}
{"year":"2020","title":"Substance over Style: Document-Level Targeted Content Transfer","authors":["A Hegel, S Rao, A Celikyilmaz, B Dolan - arXiv preprint arXiv:2010.08618, 2020"],"snippet":"Page 1. Substance over Style: Document-Level Targeted Content Transfer Allison Hegel1∗ Sudha Rao2 Asli Celikyilmaz2 Bill Dolan2 1Lexion, Seattle, WA, USA 2Microsoft Research, Redmond, WA, USA [email protected] {sudhra,aslicel,billdol}@microsoft.com Abstract …","url":["https://arxiv.org/pdf/2010.08618"]}
+
{"year":"2020","title":"Subword Segmentation and a Single Bridge Language Affect Zero-Shot Neural Machine Translation","authors":["A Rios, M Müller, R Sennrich - arXiv preprint arXiv:2011.01703, 2020","AR Gonzales, M Müller, R Sennrich - Proceedings of the Fifth Conference on …, 2020"],"snippet":"… Page 3. corpora training dev test Language Pairs with English: de↔en Commoncrawl, Europarl-v9, Wikititles-v1 5M 250 2000 cs↔en Europarl-v9, CzEng1.7 5M 250 2000 fr↔en Commoncrawl, Europarl-v7 …","url":["https://arxiv.org/pdf/2011.01703","https://www.aclweb.org/anthology/2020.wmt-1.64.pdf"]}
{"year":"2020","title":"Suggesting Citations for Wikidata Claims based on Wikipedia's External References","authors":["P Curotto, A Hogan"],"snippet":"… Offline: Given that some Wikidata items do not have an associated Wikipedia article, that many Wikipedia articles have few references, etc., it would be interesting to develop a broader corpus with more documents from the Web, perhaps from the Common Crawl …","url":["http://aidanhogan.com/docs/wikidata-references.pdf"]}
{"year":"2020","title":"Supervised Understanding of Word Embeddings","authors":["HZ Yerebakan, P Bhatia, Y Shinagawa"],"snippet":"… In our experiments, we have used scikit-learn linear logistic regression model with a positive class weight of 2 to enhance the effect of positive words. We have used top 250k words of Fasttext Common Crawl word …","url":["https://rcqa-ws.github.io/papers/paper8.pdf"]}
{"year":"2020","title":"Surface pattern-enhanced relation extraction with global constraints","authors":["H Jiang, JT Liu, S Zhang, D Yang, Y Xiao, W Wang - Knowledge and Information …, 2020"],"snippet":"Relation extraction is one of the most important tasks in information extraction. The traditional works either use sentences or surface patterns (ie, the.","url":["https://link.springer.com/article/10.1007/s10115-020-01502-y"]}
{"year":"2020","title":"Text-based classification of interviews for mental health--juxtaposing the state of the art","authors":["JV Wouts - arXiv preprint arXiv:2008.01543, 2020"],"snippet":"… Model name Pretrain corpus Tokenizer type Acc Sentiment analysis belabBERT Common Crawl Dutch (non-shuffled) BytePairEncoding 95.92∗ % RobBERT Common Crawl Dutch (shuffled) BytePairEncoding 94.42 …","url":["https://arxiv.org/pdf/2008.01543"]}
{"year":"2020","title":"TextSETTR: Label-Free Text Style Extraction and Tunable Targeted Restyling","authors":["P Riley, N Constant, M Guo, G Kumar, D Uthus… - arXiv preprint arXiv …, 2020"],"snippet":"… Furthermore, we demonstrate that a single model trained on unlabeled Common Crawl data is capable of transferring along multiple dimensions including dialect, emotiveness, formality, politeness, and sentiment. 1 INTRODUCTION …","url":["https://arxiv.org/pdf/2010.03802"]}
{"year":"2020","title":"TF-CR: Weighting Embeddings for Text Classification","authors":["A Zubiaga - arXiv preprint arXiv:2012.06606, 2020"],"snippet":"… Page 6. • cglove: GloVe embeddings trained from Common Crawl. • wglove: GloVe embeddings trained from Wikipedia.6 We use two different classifiers for these experiments, SVM and Logistic Regression, which are known …","url":["https://arxiv.org/pdf/2012.06606"]}
+
{"year":"2020","title":"The 2019 BBN Cross-lingual Information Retrieval System","authors":["DK Le Zhang, W Hartmann, M Srivastava, L Tarlin… - LREC 2020 Language Resources …","L Zhang, D Karakos, W Hartmann, M Srivastava… - Proceedings of the …, 2020"],"snippet":"… The neural MT models were trained on both versions of the data together, in a single “multi-style” fashion, to handle both text and ASR transcript as input. This was however not done for the phrase-based model described …","url":["http://www.lrec-conf.org/proceedings/lrec2020/workshops/CLSSTS2020/CLSSTS-2020.pdf#page=49","https://www.aclweb.org/anthology/2020.clssts-1.8.pdf"]}
{"year":"2020","title":"The 2020 bilingual, bi-directional webnlg+ shared task overview and evaluation results (webnlg+ 2020)","authors":["TC Ferreira, C Gardent, C van der Lee, N Ilinykh… - Proceedings of the 3rd …, 2020"],"snippet":"… 3.3 Mono-task, Bilingual Approaches cuni-ufal. The mBART model (Liu et al., 2020) is pre-trained for multilingual denoising on the large-scale multilingual CC25 corpus extracted from Common Crawl, which contains …","url":["https://webnlg-challenge.loria.fr/files/2020.webnlg-papers.7.pdf"]}
{"year":"2020","title":"THE ABILITY OF WORD EMBEDDINGS TO CAPTURE WORD SIMILARITIES","authors":["M Toshevska, F Stojanovska, J Kalajdjieski"],"snippet":"… architectures [25]. In our experiments, we have used pre-trained models both trained with subword information on Wikipedia 2017 (16B tokens) and trained with subword information on Common Crawl (600B tokens)4. 2https …","url":["http://www.academia.edu/download/63915170/120200714-10552-nn915u.pdf"]}
{"year":"2020","title":"The ADAPT Centre's neural MT systems for the WAT 2020 document-level translation task","authors":["W Jooste, R Haque, A Way - 2020"],"snippet":"… Finally, source-language monolingual data with n-grams similar to that of the documents in the test set was mined from the Common Crawl Corpus6 to be used as a source-side original synthetic corpus (SOSC) for fine-tuning the NMT model parameters …","url":["http://doras.dcu.ie/25205/1/WAT_2020.pdf"]}
2021.jsonl
CHANGED
@@ -132,7 +132,7 @@
{"year":"2021","title":"Authorship Weightage Algorithm for Academic publications: A new calculation and ACES webserver for determining expertise","authors":["WL Wu, O Tan, KF Chan, NB Ong, D Gunasegaran… - Methods and Protocols, 2021"],"snippet":"… the back-end server. These word vectors were trained on Common Crawl (https://commoncrawl.org (last accessed on 28 April 2021)) using fastText [17], and are used to map the processed query to its corresponding values …","url":["https://www.mdpi.com/2409-9279/4/2/41/pdf"]}
{"year":"2021","title":"Automated Change Detection in Privacy Policies","authors":["A Adhikari - 2020"],"snippet":"Page 1. University of Denver Digital Commons @ DU Electronic Theses and Dissertations Graduate Studies 2020 Automated Change Detection in Privacy Policies Andrick Adhikari Follow this and additional works at: https://digitalcommons.du.edu/etd …","url":["https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2702&context=etd"]}
{"year":"2021","title":"Automated essay scoring: A review of the field","authors":["P Lagakis, S Demetriadis - … International Conference on Computer, Information and …, 2021"],"snippet":"… Transformer models make use of those huge datasets of existing general text data, such as Wikipedia Corpus and Common Crawl, to pretrain multilayer neural networks with context-sensitive meaning of, and relations between, words, such as …","url":["https://ieeexplore.ieee.org/abstract/document/9618476/"]}
-
{"year":"2021","title":"Automated Grading of Exam Responses: An Extensive Classification Benchmark","authors":["A Farazouli, Z Lee, P Papapetrou, U Fors - … Science: 24th International Conference, DS 2021 …","J Ljungman, V Lislevand, J Pavlopoulos, A Farazouli… - International Conference on …, 2021"],"snippet":"… This method proves that training BERT with alternative design choices and with more data, including the CommonCrawl News dataset,
{"year":"2021","title":"Automated identification of bias inducing words in news articles using linguistic and context-oriented features","authors":["T Spinde, L Rudnitckaia, J Mitrović, F Hamborg… - Information Processing & …, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457321000157"]}
{"year":"2021","title":"Automated methods for Question-Answering in Icelandic","authors":["V Snæbjarnarson"],"snippet":"… The source of the data is the open internet, made accessible to those with relatively modest computing resources and disk storage through the targeted use of the Common Crawl datasets that comprise petabytes of data. Prior work has focused on the …","url":["https://vesteinn.is/thesis_150921.pdf"]}
{"year":"2021","title":"Automatic Detection of Fake","authors":["BM Bažík"],"snippet":"… For the data, they created the RealNews dataset, a large corpus of news articles from Common Crawl1. Fake News Detection Using Deep Learning Techniques [11] compared Logistic Regression (LR), Naive Bayes (NB) …","url":["https://is.muni.cz/th/hk1px/Martin_Bazik_master_thesis.pdf"]}
@@ -233,7 +233,7 @@
{"year":"2021","title":"CopyCat: Near-Duplicates within and between the ClueWeb and the Common Crawl","authors":["M Fröbe, J Bevendorff, L Gienapp, M Völske, B Stein… - 2021"],"snippet":"The amount of near-duplicates in web crawls like the ClueWeb or Common Crawl demands from their users either to develop a preprocessing pipeline for deduplication, which is costly both computationally and in person hours, or accepting …","url":["https://webis.de/downloads/publications/papers/froebe_2021.pdf"]}
{"year":"2021","title":"Corpulyzer: A Novel Framework for Building Low Resource Language Corpora","authors":["B Tahir, MA Mehmood - IEEE Access"],"snippet":"… Leveraging dataset from Common Crawl Corpus (CCC), first, we prepare a list of seed URLs by filtering the Urdu language webpages … INDEX TERMS Common crawl, web crawling, text corpus, corpus analysis, regional languages corpora …","url":["https://ieeexplore.ieee.org/iel7/6287639/9312710/09316706.pdf"]}
{"year":"2021","title":"Corpus and Baseline Model for Domain-Specific Entity Recognition in German","authors":["S Torge, W Hahn, R Jäkel, WE Nagel - 2020 6th IEEE Congress on Information …, 2021"],"snippet":"… Deepset AI4 FastText Wikipedia 100 Deepset AI GloVe Wikipedia 300 FastText [26] FastText Wikipedia, 300 Common Crawl Kyubyong5 Word2Vec Wikipedia 300 SB Tweets s[27] FastText 50 Mio Tweets 100 SB Tweets l[27] FastText 50 Mio Tweets 300 …","url":["https://ieeexplore.ieee.org/abstract/document/9357189/"]}
-
{"year":"2021","title":"COVIDSenti: A Large-Scale Benchmark Twitter Data Set for COVID-19 Sentiment Analysis","authors":["P BALAJI","U Naseem, I Razzak, M Khushi, PW Eklund, J Kim - IEEE Transactions on …, 2021"],"snippet":"… Similarly, for word embeddings, pretrained Word2Vec, GloVe, and fastText embeddings trained on Common Crawl and Wikipedia are used and have
{"year":"2021","title":"CRASS: A Novel Data Set and Benchmark to Test Counterfactual Reasoning of Large Language Models","authors":["J Frohberg, F Binder - arXiv preprint arXiv:2112.11941, 2021"],"snippet":"We introduce the CRASS (counterfactual reasoning assessment) data set and benchmark utilizing questionized counterfactual conditionals as a novel and powerful tool to evaluate large language models. We present the data set design …","url":["https://arxiv.org/pdf/2112.11941"]}
{"year":"2021","title":"Creation, Enrichment and Application of Knowledge Graphs","authors":["S Gottschalk - 2021"],"snippet":"Page 1. CREATION, ENRICHMENT AND APPLICATION OF KNOWLEDGE GRAPHS Von der Fakultät für Elektrotechnik und Informatik der Gottfried Wilhelm Leibniz Universität Hannover zur Erlangung des Grades DOKTOR …","url":["https://www.repo.uni-hannover.de/bitstream/handle/123456789/11125/phd_thesis_gottschalk.pdf?sequence=1"]}
{"year":"2021","title":"CrisisBench: Benchmarking Crisis-related Social Media Datasets for Humanitarian Information Processing","authors":["F Alam, H Sajjad, M Imran, F Ofli - 2021"],"snippet":"… 2017). fastText: For the fastText (Joulin et al. 2017), we used pretrained embeddings trained on Common Crawl, which is re- leased by fastText for English. Transformer models: Pre-trained models have achieved …","url":["https://mimran.me/papers/CrisisBench_Benchmarking_Crisis_related_Social_Media_Datasets_ICWSM21.pdf"]}
@@ -274,7 +274,7 @@
{"year":"2021","title":"Deep Learning Transformer Architecture for Named Entity Recognition on Low Resourced Languages: State of the art results","authors":["R Hanslo - arXiv preprint arXiv:2111.00830, 2021"],"snippet":"… model trained on 100 languages uses 2.5 TB of CommonCrawl (CC) data [2]. From the 100 languages used by the XLM-R multilingual masked language model, it is noted that Afrikaans (af) and isiXhosa (xh) are included in the pre-training. …","url":["https://arxiv.org/pdf/2111.00830"]}
{"year":"2021","title":"Deep Learning With Anaphora Resolution for the Detection of Tweeters With Depression: Algorithm Development and Validation Study","authors":["A Wongkoblap, MA Vadillo, V Curcin - JMIR Mental Health, 2021"],"snippet":"Background: Mental health problems are widely recognized as a major public health challenge worldwide. This concern highlights the need to develop effective tools for detecting mental health disorders in the population. Social …","url":["https://mental.jmir.org/2021/8/e19824/"]}
{"year":"2021","title":"Deep Learning-based Sentiment Analysis of Facebook Data: The Case of Turkish Users","authors":["Ö Çoban, SA Özel, A İnan - The Computer Journal, 2021"],"snippet":"Abstract. Sentiment analysis (SA) is an essential task for many domains where it is crucial to know users' public opinion about events, products, brands, politi.","url":["https://academic.oup.com/comjnl/advance-article-abstract/doi/10.1093/comjnl/bxaa172/6095851"]}
-
{"year":"2021","title":"DeepEva: A
{"year":"2021","title":"Delving into Deep Imbalanced Regression","authors":["Y Yang, K Zha, YC Chen, H Wang, D Katabi - arXiv preprint arXiv:2102.09554, 2021"],"snippet":"Page 1. Delving into Deep Imbalanced Regression Yuzhe Yang 1 Kaiwen Zha 1 Ying-Cong Chen 1 Hao Wang 2 Dina Katabi 1 Abstract Real-world data often exhibit imbalanced distributions, where certain target values …","url":["https://arxiv.org/pdf/2102.09554"]}
{"year":"2021","title":"Democratic Backsliding and Media Responses to Government Repression: Machine Learning Evidence from Tanzania","authors":["FS Adiguzel, D Romero, E Wibbels"],"snippet":"… Using the list of international, regional and national domains, we first check GDELT and the Internet Archive for available links, pull the available web pages from the Common Crawl and from the websites directly. We then initialize Scrapy …","url":["https://mlp.trinity.duke.edu/assets/Tanzania_ML4P.pdf"]}
{"year":"2021","title":"Dense Events Grounding in Video","authors":["P Bao, Q Zheng, Y Mu - 2021"],"snippet":"… 2015) as previous methods to extract C3D video features on both datasets. And we use Glove (Jeffrey Pennington and Manning 2014) word embeddings pretrained on Common Crawl to represent each word in the sentences …","url":["http://www.muyadong.com/paper/3254_PeijunB.pdf"]}
@@ -356,7 +356,7 @@
{"year":"2021","title":"Error identification for machine translation with metric embedding and attention","authors":["R Rubino, A Fujita, B Marie - Proceedings of the 2nd Workshop on Evaluation and …, 2021"],"snippet":"Abstract Quality Estimation (QE) for Machine Translation has been shown to reach relatively high accuracy in predicting sentence-level scores, relying on pretrained contextual embeddings and human-produced quality scores. However, the lack of …","url":["https://aclanthology.org/2021.eval4nlp-1.15.pdf"]}
{"year":"2021","title":"ESPnet-ST IWSLT 2021 Offline Speech Translation System","authors":["H Inaguma, B Yan, S Dalmia, P Gu, J Shi, K Duh… - arXiv preprint arXiv …, 2021"],"snippet":"… Must-C - 0.68M Must-C v2 0.74M ST-TED (cleaned) 0.40M Europarl 1.82M Commoncrawl 2.39M Paracrawl 34.37M NewsCommentary 0.37M WikiTitles 1.38M RAPID 1.63M WikiMatrix 1.57M Table 1: Corpus statistics data was …","url":["https://arxiv.org/pdf/2107.00636"]}
{"year":"2021","title":"Est-ce que vous compute? Code-switching, cultural identity, and AI","authors":["A Falbo, T LaCroix - arXiv preprint arXiv:2112.08256, 2021"],"snippet":"… of the Common Crawl dataset, resulting in approximately 14 billion tokens (though they do not provide details of how the Common Crawl … is predominantly English: around 45% of HTML pages in the Common Crawl dataset have English as their …","url":["https://arxiv.org/pdf/2112.08256"]}
-
{"year":"2021","title":"Establishing Trustworthiness Through Algorithmic Approaches to Qualitative Research","authors":["H Nguyen, J Ahn, A Belgrave, J Lee, L Cawelti, HE Kim… - International Conference on …, 2021","HE Kim, Y Prado, R Santagata, A Villavicencio - … 2020, Malibu, CA, USA, February 1-3 …, 2021"],"snippet":"… The model contains 300 dimensional word vectors that were trained on a vocabulary of 2 million words from web page data (Common Crawl dataset; GloVe,
{"year":"2021","title":"Estimating the Effects of Text Genre, Image Resolution and Algorithmic Complexity needed for Sinhala Optical Character Recognition","authors":["I Anuradha, C Liyanage, R Weerasinghe - International Journal on Advances in ICT …, 2021"],"snippet":"… 2) 5million+ sentences in Sinhala common crawler: In 2019, Guzman [16] presented two monolingual corpora for Sinhala. Those were a combination of 155k+ sentences of filtered Sinhala Wikipedia and 5178k+ sentences of Sinhala common crawl …","url":["https://icter.sljol.info/articles/10.4038/icter.v14i3.7231/galley/5596/download/"]}
{"year":"2021","title":"Estimation of Imageability Ratings of English Words Using Neural Networks","authors":["VV Bochkarev, AV Savinkov, AV Shevlyakova - Mexican International Conference on …, 2021"],"snippet":"… (a set of CommonCrawl vectors). The values of the correlation coefficients obtained in [6] are shown in the last row of the table. We chose the best values from those given in [6] that were obtained also using the fastText vectors trained on the …","url":["https://link.springer.com/chapter/10.1007/978-3-030-89820-5_5"]}
{"year":"2021","title":"Evaluating and Explaining Natural Language Generation with GenX","authors":["K Duskin, S Sharma, JY Yun, E Saldanha, D Arendt - … on Data Science with Human in …, 2021"],"snippet":"… While we demonstrated utility on datasets with tens of thousands of text examples, the nearest neighbor approach used would become intractable on massive text corpora such as CommonCrawl 7. This limits GenX to …","url":["https://www.aclweb.org/anthology/2021.dash-1.12.pdf"]}
@@ -365,7 +365,7 @@
{"year":"2021","title":"Evaluating Off-the-Shelf Machine Listening and Natural Language Models for Automated Audio Captioning","authors":["B Weck, X Favory, K Drossos, X Serra - arXiv preprint arXiv:2110.07410, 2021"],"snippet":"… We use the publicly available model trained with subword information on the Common Crawl corpus, which contains 600B tokens and is significantly larger than the corpora used for the Glove and word2vec model [38]. We employ BERT as our …","url":["https://arxiv.org/pdf/2110.07410"]}
{"year":"2021","title":"Evaluating Sequence-to-Sequence Modelling for Dialogue State Tracking","authors":["M Tuli, S Agrawal"],"snippet":"… 3.2.1 Word embeddings. We use GloVe [21] pretrained word embeddings, as done in [9]. We use the GloVe-840B-300D embeddings: 300-dimensional embeddings trained on 840B tokens from Common Crawl. 3.2.2 Models …","url":["https://www.mathieutuli.com/docs/nmt_for_dst.pdf"]}
{"year":"2021","title":"Evaluating the Evaluation Metrics for Style Transfer: A Case Study in Multilingual Formality Transfer","authors":["E Briakou, S Agrawal, J Tetreault, M Carpuat - arXiv preprint arXiv:2110.10668, 2021"],"snippet":"While the field of style transfer (ST) has been growing rapidly, it has been hampered by a lack of standardized practices for automatic evaluation. In this paper, we evaluate leading ST automatic metrics on the oft-researched task of formality style …","url":["https://arxiv.org/pdf/2110.10668"]}
-
{"year":"2021","title":"Evaluating the Text-to-SQL Capabilities of Large Language Models","authors":["MVAEX TS","N Rajkumar, R Li, D Bahdanau - arXiv preprint arXiv:2204.00498, 2022"],"snippet":"
{"year":"2021","title":"Evaluating the textbugger NLP Attack on","authors":["S Eicher"],"snippet":"… 1 Introduction The ubiquity of deep learning models which learn from their user-base, like social recommenders or CAPTCHA[4], or other relatively unsupervised datasets, like the Common Crawl, in practice has given rise …","url":["http://cs229.stanford.edu/proj2021spr/report2/82526938.pdf"]}
{"year":"2021","title":"Evaluating Word Embeddings with Categorical Modularity","authors":["S Casacuberta, K Halevy, DE Blasi - arXiv preprint arXiv:2106.00877, 2021"],"snippet":"… these embeddings. FastText. Monolingual embeddings for 157 languages trained on Common Crawl and Wikipedia that use CBOW with position-weights and character n-grams (Bojanowski et al., 2017). MUSE. Cross-lingual …","url":["https://arxiv.org/pdf/2106.00877"]}
{"year":"2021","title":"Evaluation and Interpretation of Word Embeddings","authors":["J Kuchár"],"snippet":"… 2.3.1 Facebook's Wikipedia and Common Crawl FastText Word Embeddings for 157 languages Produced by Grave et al. [8], the Facebook's Wikipedia and Common Crawl fastText word embeddings may be the most-used and …","url":["https://is.muni.cz/th/eycxc/Evaluation_and_interpretation_of_word_embeddings_Archive.pdf"]}
@@ -832,7 +832,7 @@
{"year":"2021","title":"SAUCE: Truncated Sparse Document Signature Bit-Vectors for Fast Web-Scale Corpus Expansion","authors":["M Wahed, D Gruhl, A Alba, AL Gentile, P Ristoski… - arXiv preprint arXiv …, 2021"],"snippet":"… This experiment facilitates evaluation on a limited seed corpus scenario. The document collection is a snapshot (approximately 200 million documents) of the Common Crawl web archive1, which is regularly updated and consists …","url":["https://arxiv.org/pdf/2108.11948"]}
{"year":"2021","title":"Say What? Collaborative Pop Lyric Generation Using Multitask Transfer Learning","authors":["N Ram, T Gummadi, R Bhethanabotla, RJ Savery… - Proceedings of the 9th …, 2021"],"snippet":"… The authors opted for an encoder-decoder transformer based design, trained on a variant of the Common Crawl corpus named the Colossal Common Crawl Corpus (or C4 for short). The T5 shows a remarkable ability to tackle many different tasks with …","url":["https://dl.acm.org/doi/abs/10.1145/3472307.3484175"]}
{"year":"2021","title":"Scalable Graph Convolutional Variational Autoencoders","authors":["D Unyi, B Gyires-Tóth - 2021 IEEE 15th International Symposium on Applied …, 2021"],"snippet":"… In Reddit, nodes are Reddit posts, and if the same user commented on two posts, a link is drawn between them; node features are GloVe CommonCrawl word vectors [31] based on the average embedding of the …","url":["https://ieeexplore.ieee.org/abstract/document/9465579/"]}
-
{"year":"2021","title":"
{"year":"2021","title":"Scaling Laws for Transfer","authors":["D Hernandez, J Kaplan, T Henighan, S McCandlish - arXiv preprint arXiv:2102.01293, 2021"],"snippet":"… Pre-trained text models were trained on a mix of WebText2 described in [KMH+20], Common Crawl5 [RSR+20], English Wikipedia, and publicly available Internet Books … 5https://commoncrawl.org/the-data/ 6https://www.gharchive.org/ 6 Page 7. 3 Results …","url":["https://arxiv.org/pdf/2102.01293"]}
{"year":"2021","title":"Scarecrow: A Framework for Scrutinizing Machine Text","authors":["Y Dou, M Forbes, R Koncel-Kedziorski, NA Smith… - arXiv preprint arXiv …, 2021"],"snippet":"… GPT-3 DaVinci (Brown et al., 2020) The 175B parameter variant of GPT-3, which is trained on a version of the Common Crawl web scrape with additional filtering and deduplicating. These model choices allow us to study several …","url":["https://arxiv.org/pdf/2107.01294"]}
{"year":"2021","title":"SCOPA: Soft Code-Switching and Pairwise Alignment for Zero-Shot Cross-lingual Transfer","authors":["D Lee, J Lee, G Lee, B Chun, S Hwang - Proceedings of the 30th ACM International …, 2021"],"snippet":"… To enhance such transfer, XLM-R [4] is pre-trained on 100 languages with CommonCrawl Corpora, supervised by translational objectives, to transfer from resource-rich languages (eg, English and Chinese) to resource-poor languages …","url":["https://dl.acm.org/doi/abs/10.1145/3459637.3482176"]}
@@ -891,7 +891,7 @@
{"year":"2021","title":"Stacked Embeddings and Multiple Fine-Tuned XLM-RoBERTa Models for Enhanced Hostility Identification","authors":["XLM Fine-Tuned - Combating Online Hostile Posts in Regional …"],"snippet":"… XLM-RoBERTa. XLM-RoBERTa [10] is a large multilingual model trained on the CommonCrawl Dataset. There are two versions: base and large; both have around 250k words in the vocabulary, and the base has 250M parameters, while large has 560M …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=yD8oEAAAQBAJ&oi=fnd&pg=PA224&dq=commoncrawl&ots=ai1kFeEaRU&sig=UkWmp4nmnIiSJ004dO8RQkmOvmk"]}
{"year":"2021","title":"StarryThoughts: Facilitating Diverse Opinion Exploration on Social Issues","authors":["H Kim, H Kim, KJ Jo, J Kim - Proceedings of the ACM on Human-Computer …, 2021"],"snippet":"… After translation, we computed each opinion's embedded vector with the algorithm using pre-trained GloVe word embeddings built upon 42B tokens from Common Crawl [49]. 4.3 Implementation details The front-end of StarryThoughts is implemented with React …","url":["https://dl.acm.org/doi/abs/10.1145/3449140"]}
{"year":"2021","title":"Step-unrolled Denoising Autoencoders for Text Generation","authors":["N Savinov, J Chung, M Binkowski, E Elsen, A Oord - arXiv preprint arXiv:2112.06749, 2021"],"snippet":"… results on unconditional language modeling on the Colossal Cleaned Common Crawl dataset and a dataset of Python code from GitHub. … • We demonstrate good qualitative results for unconditional generation and inpainting on Colossal Clean …","url":["https://arxiv.org/pdf/2112.06749"]}
-
{"year":"2021","title":"Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?","authors":["R Choenni, E Shutova, R van Rooij - arXiv preprint arXiv:2109.10052, 2021","WWE Do, FLDT Fine-Tuning"],"snippet":"
{"year":"2021","title":"Stock Volume Prediction Based on Polarity of Tweets, News, and Historical Data Using Deep Learning","authors":["N Jawahar, J Chelladurai, I Sakthivel, B Bajracharya - 2020 2nd International …, 2020"],"snippet":"… token in order to predict the entity of that token. The CNN core model is pre-trained with GloVe vectors on Common Crawl, with 86.43% precision and 86.37% recall for NER. A python script is written that uses Psycopg to extract …","url":["https://dl.acm.org/doi/abs/10.1145/3440054.3440063"]}
{"year":"2021","title":"Storytelling Exhibitions: Identity, Truth and Wonder","authors":["P Hughes - 2021"],"snippet":""}
{"year":"2021","title":"Strategyproof Learning: Building Trustworthy User-Generated Datasets","authors":["S Farhadkhani, R Guerraoui, LN Hoang - arXiv preprint arXiv:2106.02398, 2021"],"snippet":"Page 1. arXiv:2106.02398v1 [cs.LG] 4 Jun 2021 Strategyproof Learning: Building Trustworthy User-Generated Datasets Sadegh Farhadkhani IC School, EPFL Lausanne, Switzerland [email protected] Rachid Guerraoui …","url":["https://arxiv.org/pdf/2106.02398"]}
@@ -1057,7 +1057,7 @@
{"year":"2021","title":"Visualizing large-scale high-dimensional data via hierarchical embedding of KNN graphs","authors":["H Zhu, M Zhu, Y Feng, D Cai, Y Hu, S Wu, X Wu… - Visual Informatics, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S2468502X21000292"]}
{"year":"2021","title":"VITALITY: Promoting Serendipitous Discovery of Academic Literature with Transformers & Visual Analytics","authors":["A Narechania, A Karduni, R Wesslen, E Wall - arXiv preprint arXiv:2108.03366, 2021"],"snippet":"… Unlike many pre-trained language models that use a general corpus like Wikipedia or the Common Crawl [14, 43, 49], SPECTER was pre-trained on academic literature (sciBERT [?]) and fine-tuned with citations which …","url":["https://arxiv.org/pdf/2108.03366"]}
{"year":"2021","title":"VOH. CoLAB at TREC 2020 Health Misinformation Track⋆","authors":["SN Gonçalves, F Martins"],"snippet":"… 2.1 Collection The collection contains 65 million news articles from CommonCrawl News4 corresponding to the period from January to April of 2020 … Recently, vinegar has been promoted as a disinfectant (...) 4 …","url":["https://trec.nist.gov/pubs/trec29/papers/vohcolab.HM.pdf"]}
-
{"year":"2021","title":"Voted In, Standing Out: Public Response to Immigrants' Political Accession","authors":["G Grossman, S Zonszein - 2021"],"snippet":"… newspapers, covering the general elections from 2010–2019.8 This data is from Common Crawl, which is an open repository of web crawl data. We assume that an article refers to a candidate's ethnic group when three conditions are met: 1) the publication date is …","url":["https://files.osf.io/v1/resources/xd4wk/providers/osfstorage/614782978ae0920335d8c84c?action=download&direct&version=2"]}
{"year":"2021","title":"We Need to Talk About Data: The Importance of Data Readiness in Natural Language Processing","authors":["F Olsson, M Sahlgren - arXiv preprint arXiv:2110.05464, 2021"],"snippet":"In this paper, we identify the state of data as being an important reason for failure in applied Natural Language Processing (NLP) projects. We argue that there is a gap between academic research in NLP and its application to problems outside …","url":["https://arxiv.org/pdf/2110.05464"]}
{"year":"2021","title":"Web Archive Analytics","authors":["M Völske, J Bevendorff, J Kiesel, B Stein, M Fröbe… - INFORMATIK 2020, 2021"],"snippet":"… in Figure 5, beginning with the bottom-most data acquisition layerȷ Primary sources for data ingestion include web crawls and web archives, such as the aforementioned Internet Archive, the Common Crawl,13 the older … 13 …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/34759/A8-1.pdf?sequence=1&isAllowed=y"]}
{"year":"2021","title":"Web Content Authentication: A Machine Learning Approach to Identify Fake and Authentic Web Pages on Internet","authors":["J Ashok, P Badoni - … Technology for Competitive Strategies (ICTCS 2020) …"],"snippet":"… SpringerLink (Springer, 21 June 2017). www. link. springer. com/chapter/10.1007/978- 3-319-69784-0_15 42. Link to GitHub where code is hosted. https://github. com/Jkrish1011/Web-Content-Authentic ator 43. https://commoncrawl. org/","url":["http://books.google.de/books?hl=en&lr=lang_en&id=Dwo3EAAAQBAJ&oi=fnd&pg=PA85&dq=commoncrawl&ots=RCTLIhR7zF&sig=_9zgxY48BSktz0Jk-c6Y7XTbVf0"]}
{"year":"2021","title":"Authorship Weightage Algorithm for Academic publications: A new calculation and ACES webserver for determining expertise","authors":["WL Wu, O Tan, KF Chan, NB Ong, D Gunasegaran… - Methods and Protocols, 2021"],"snippet":"… the back-end server. These word vectors were trained on Common Crawl (https://commoncrawl.org (last accessed on 28 April 2021)) using fastText [17], and are used to map the processed query to its corresponding values …","url":["https://www.mdpi.com/2409-9279/4/2/41/pdf"]}
{"year":"2021","title":"Automated Change Detection in Privacy Policies","authors":["A Adhikari - 2020"],"snippet":"Page 1. University of Denver Digital Commons @ DU Electronic Theses and Dissertations Graduate Studies 2020 Automated Change Detection in Privacy Policies Andrick Adhikari Follow this and additional works at: https://digitalcommons.du.edu/etd …","url":["https://digitalcommons.du.edu/cgi/viewcontent.cgi?article=2702&context=etd"]}
{"year":"2021","title":"Automated essay scoring: A review of the field","authors":["P Lagakis, S Demetriadis - … International Conference on Computer, Information and …, 2021"],"snippet":"… Transformer models make use of those huge datasets of existing general text data, such as Wikipedia Corpus and Common Crawl, to pretrain multilayer neural networks with context-sensitive meaning of, and relations between, words, such as …","url":["https://ieeexplore.ieee.org/abstract/document/9618476/"]}
+
{"year":"2021","title":"Automated Grading of Exam Responses: An Extensive Classification Benchmark","authors":["A Farazouli, Z Lee, P Papapetrou, U Fors - … Science: 24th International Conference, DS 2021 …","J Ljungman, V Lislevand, J Pavlopoulos, A Farazouli… - International Conference on …, 2021"],"snippet":"… This method proves that training BERT with alternative design choices and with more data, including the CommonCrawl News dataset, improves the performance on downstream tasks. Open image in new window Fig …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=IydHEAAAQBAJ&oi=fnd&pg=PA3&dq=commoncrawl&ots=QIe2sENq0_&sig=LQ1NnDlylvNDV4-vNPAiGJEMZd4","https://link.springer.com/chapter/10.1007/978-3-030-88942-5_1"]}
|
| 136 |
{"year":"2021","title":"Automated identification of bias inducing words in news articles using linguistic and context-oriented features","authors":["T Spinde, L Rudnitckaia, J Mitrović, F Hamborg… - Information Processing & …, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S0306457321000157"]}
|
| 137 |
{"year":"2021","title":"Automated methods for Question-Answering in Icelandic","authors":["V Snæbjarnarson"],"snippet":"… The source of the data is the open internet, made accessible to those with relatively modest computing resources and disk storage through the targeted use of the Common Crawl datasets that comprise petabytes of data. Prior work has focused on the …","url":["https://vesteinn.is/thesis_150921.pdf"]}
|
| 138 |
{"year":"2021","title":"Automatic Detection of Fake","authors":["BM Bažík"],"snippet":"… For the data, they created the RealNews dataset, a large corpus of news articles from Common Crawl1. Fake News Detection Using Deep Learning Techniques [11] compared Logistic Regression (LR), Naive Bayes (NB) …","url":["https://is.muni.cz/th/hk1px/Martin_Bazik_master_thesis.pdf"]}
|
|
|
|
| 233 |
{"year":"2021","title":"CopyCat: Near-Duplicates within and between the ClueWeb and the Common Crawl","authors":["M Fröbe, J Bevendorff, L Gienapp, M Völske, B Stein… - 2021"],"snippet":"The amount of near-duplicates in web crawls like the ClueWeb or Common Crawl demands from their users either to develop a preprocessing pipeline for deduplication, which is costly both computationally and in person hours, or accepting …","url":["https://webis.de/downloads/publications/papers/froebe_2021.pdf"]}
|
| 234 |
{"year":"2021","title":"Corpulyzer: A Novel Framework for Building Low Resource Language Corpora","authors":["B Tahir, MA Mehmood - IEEE Access"],"snippet":"… Leveraging dataset from Common Crawl Corpus (CCC), first, we prepare a list of seed URLs by filtering the Urdu language webpages … INDEX TERMS Common crawl, web crawling, text corpus, corpus analysis, regional languages corpora …","url":["https://ieeexplore.ieee.org/iel7/6287639/9312710/09316706.pdf"]}
|
| 235 |
{"year":"2021","title":"Corpus and Baseline Model for Domain-Specific Entity Recognition in German","authors":["S Torge, W Hahn, R Jäkel, WE Nagel - 2020 6th IEEE Congress on Information …, 2021"],"snippet":"… Deepset AI4 FastText Wikipedia 100 Deepset AI GloVe Wikipedia 300 FastText [26] FastText Wikipedia, 300 Common Crawl Kyubyong5 Word2Vec Wikipedia 300 SB Tweets s[27] FastText 50 Mio Tweets 100 SB Tweets l[27] FastText 50 Mio Tweets 300 …","url":["https://ieeexplore.ieee.org/abstract/document/9357189/"]}
|
| 236 |
+
{"year":"2021","title":"COVIDSenti: A Large-Scale Benchmark Twitter Data Set for COVID-19 Sentiment Analysis","authors":["P BALAJI","U Naseem, I Razzak, M Khushi, PW Eklund, J Kim - IEEE Transactions on …, 2021"],"snippet":"… Term frequency-inverse document frequency (TF-IDF) has been used for vectorization. Similarly, for word embeddings, pretrained Word2Vec, GloVe, and fastText embeddings trained on Common Crawl and Wikipedia are used and have 300-D vectors …","url":["https://ieeexplore.ieee.org/abstract/document/9340540/","https://sist.sathyabama.ac.in/sist_naac/documents/1.3.4/1822-b.tech-it-batchno-359.pdf"]}
|
| 237 |
{"year":"2021","title":"CRASS: A Novel Data Set and Benchmark to Test Counterfactual Reasoning of Large Language Models","authors":["J Frohberg, F Binder - arXiv preprint arXiv:2112.11941, 2021"],"snippet":"We introduce the CRASS (counterfactual reasoning assessment) data set and benchmark utilizing questionized counterfactual conditionals as a novel and powerful tool to evaluate large language models. We present the data set design …","url":["https://arxiv.org/pdf/2112.11941"]}
|
| 238 |
{"year":"2021","title":"Creation, Enrichment and Application of Knowledge Graphs","authors":["S Gottschalk - 2021"],"snippet":"Page 1. CREATION, ENRICHMENT AND APPLICATION OF KNOWLEDGE GRAPHS Von der Fakultät für Elektrotechnik und Informatik der Gottfried Wilhelm Leibniz Universität Hannover zur Erlangung des Grades DOKTOR …","url":["https://www.repo.uni-hannover.de/bitstream/handle/123456789/11125/phd_thesis_gottschalk.pdf?sequence=1"]}
|
| 239 |
{"year":"2021","title":"CrisisBench: Benchmarking Crisis-related Social Media Datasets for Humanitarian Information Processing","authors":["F Alam, H Sajjad, M Imran, F Ofli - 2021"],"snippet":"… 2017). fastText: For the fastText (Joulin et al. 2017), we used pretrained embeddings trained on Common Crawl, which is re- leased by fastText for English. Transformer models: Pre-trained models have achieved …","url":["https://mimran.me/papers/CrisisBench_Benchmarking_Crisis_related_Social_Media_Datasets_ICWSM21.pdf"]}
|
|
|
|
| 274 |
{"year":"2021","title":"Deep Learning Transformer Architecture for Named Entity Recognition on Low Resourced Languages: State of the art results","authors":["R Hanslo - arXiv preprint arXiv:2111.00830, 2021"],"snippet":"… model trained on 100 languages uses 2.5 TB of CommonCrawl (CC) data [2]. From the 100 languages used by the XLM-R multilingual masked language model, it is noted that Afrikaans (af) and isiXhosa (xh) are included in the pre-training. …","url":["https://arxiv.org/pdf/2111.00830"]}
|
| 275 |
{"year":"2021","title":"Deep Learning With Anaphora Resolution for the Detection of Tweeters With Depression: Algorithm Development and Validation Study","authors":["A Wongkoblap, MA Vadillo, V Curcin - JMIR Mental Health, 2021"],"snippet":"Background: Mental health problems are widely recognized as a major public health challenge worldwide. This concern highlights the need to develop effective tools for detecting mental health disorders in the population. Social …","url":["https://mental.jmir.org/2021/8/e19824/"]}
|
| 276 |
{"year":"2021","title":"Deep Learning-based Sentiment Analysis of Facebook Data: The Case of Turkish Users","authors":["Ö Çoban, SA Özel, A İnan - The Computer Journal, 2021"],"snippet":"Abstract. Sentiment analysis (SA) is an essential task for many domains where it is crucial to know users' public opinion about events, products, brands, politi.","url":["https://academic.oup.com/comjnl/advance-article-abstract/doi/10.1093/comjnl/bxaa172/6095851"]}
|
| 277 |
+
{"year":"2021","title":"DeepEva: A deep neural network architecture for assessing sentence complexity in Italian and English languages","authors":["GL Bosco, G Pilato, D Schicchi - Array, 2021","GL Boscoa, G Pilato, D Schicchic"],"snippet":"Abstract Automatic Text Complexity Evaluation (ATE) is a research field that aims at creating new methodologies to make autonomous the process of the text complexity evaluation, that is the study of the text-linguistic features (eg, lexical, syntactical …","url":["https://iris.unipa.it/retrieve/handle/10447/524419/1257448/1-s2.0-S2590005621000424-main.pdf","https://www.sciencedirect.com/science/article/pii/S2590005621000424"]}
|
| 278 |
{"year":"2021","title":"Delving into Deep Imbalanced Regression","authors":["Y Yang, K Zha, YC Chen, H Wang, D Katabi - arXiv preprint arXiv:2102.09554, 2021"],"snippet":"Page 1. Delving into Deep Imbalanced Regression Yuzhe Yang 1 Kaiwen Zha 1 Ying-Cong Chen 1 Hao Wang 2 Dina Katabi 1 Abstract Real-world data often exhibit imbalanced distributions, where certain target values …","url":["https://arxiv.org/pdf/2102.09554"]}
|
| 279 |
{"year":"2021","title":"Democratic Backsliding and Media Responses to Government Repression: Machine Learning Evidence from Tanzania","authors":["FS Adiguzel, D Romero, E Wibbels"],"snippet":"… Using the list of international, regional and national domains, we first check GDELT and the Internet Archive for available links, pull the available web pages from the Common Crawl and from the websites directly. We then initialize Scrapy …","url":["https://mlp.trinity.duke.edu/assets/Tanzania_ML4P.pdf"]}
|
| 280 |
{"year":"2021","title":"Dense Events Grounding in Video","authors":["P Bao, Q Zheng, Y Mu - 2021"],"snippet":"… 2015) as previous methods to extract C3D video features on both datasets. And we use Glove (Jeffrey Pennington and Manning 2014) word embeddings pretrained on Common Crawl to represent each word in the sentences …","url":["http://www.muyadong.com/paper/3254_PeijunB.pdf"]}
|
|
|
|
| 356 |
{"year":"2021","title":"Error identification for machine translation with metric embedding and attention","authors":["R Rubino, A Fujita, B Marie - Proceedings of the 2nd Workshop on Evaluation and …, 2021"],"snippet":"Abstract Quality Estimation (QE) for Machine Translation has been shown to reach relatively high accuracy in predicting sentence-level scores, relying on pretrained contextual embeddings and human-produced quality scores. However, the lack of …","url":["https://aclanthology.org/2021.eval4nlp-1.15.pdf"]}
|
| 357 |
{"year":"2021","title":"ESPnet-ST IWSLT 2021 Offline Speech Translation System","authors":["H Inaguma, B Yan, S Dalmia, P Gu, J Shi, K Duh… - arXiv preprint arXiv …, 2021"],"snippet":"… Must-C - 0.68M Must-C v2 0.74M ST-TED (cleaned) 0.40M Europarl 1.82M Commoncrawl 2.39M Paracrawl 34.37M NewsCommentary 0.37M WikiTitles 1.38M RAPID 1.63M WikiMatrix 1.57M Table 1: Corpus statistics data was …","url":["https://arxiv.org/pdf/2107.00636"]}
|
| 358 |
{"year":"2021","title":"Est-ce que vous compute? Code-switching, cultural identity, and AI","authors":["A Falbo, T LaCroix - arXiv preprint arXiv:2112.08256, 2021"],"snippet":"… of the Common Crawl dataset, resulting in approximately 14 billion tokens (though they do not provide details of how the Common Crawl … is predominantly English: around 45% of HTML pages in the Common Crawl dataset have English as their …","url":["https://arxiv.org/pdf/2112.08256"]}
|
| 359 |
+
{"year":"2021","title":"Establishing Trustworthiness Through Algorithmic Approaches to Qualitative Research","authors":["H Nguyen, J Ahn, A Belgrave, J Lee, L Cawelti, HE Kim… - International Conference on …, 2021","HE Kim, Y Prado, R Santagata, A Villavicencio - … 2020, Malibu, CA, USA, February 1-3 …, 2021"],"snippet":"… The model contains 300 dimensional word vectors that were trained on a vocabulary of 2 million words from web page data (Common Crawl dataset; GloVe,[21]). We then worked to cluster words together using the word embedding developed with spaCy …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=k90XEAAAQBAJ&oi=fnd&pg=PA47&dq=commoncrawl&ots=6DvGd4cN1o&sig=FHX30JOd7Xl7m8NWbE-QfeuJbks","https://link.springer.com/chapter/10.1007/978-3-030-67788-6_4"]}
|
| 360 |
{"year":"2021","title":"Estimating the Effects of Text Genre, Image Resolution and Algorithmic Complexity needed for Sinhala Optical Character Recognition","authors":["I Anuradha, C Liyanage, R Weerasinghe - International Journal on Advances in ICT …, 2021"],"snippet":"… 2) 5million+ sentences in Sinhala common crawler: In 2019, Guzman [16] presented two monolingual corpora for Sinhala. Those were a combination of 155k+ sentences of filtered Sinhala Wikipedia and 5178k+ sentences of Sinhala common crawl …","url":["https://icter.sljol.info/articles/10.4038/icter.v14i3.7231/galley/5596/download/"]}
|
| 361 |
{"year":"2021","title":"Estimation of Imageability Ratings of English Words Using Neural Networks","authors":["VV Bochkarev, AV Savinkov, AV Shevlyakova - Mexican International Conference on …, 2021"],"snippet":"… (a set of CommonCrawl vectors). The values of the correlation coefficients obtained in [6] are shown in the last row of the table. We chose the best values from those given in [6] that were obtained also using the fastText vectors trained on the …","url":["https://link.springer.com/chapter/10.1007/978-3-030-89820-5_5"]}
|
| 362 |
{"year":"2021","title":"Evaluating and Explaining Natural Language Generation with GenX","authors":["K Duskin, S Sharma, JY Yun, E Saldanha, D Arendt - … on Data Science with Human in …, 2021"],"snippet":"… While we demonstrated utility on datasets with tens of thousands of text examples, the nearest neighbor approach used would become intractable on massive text corpora such as CommonCrawl 7. This limits GenX to …","url":["https://www.aclweb.org/anthology/2021.dash-1.12.pdf"]}
|
|
|
|
| 365 |
{"year":"2021","title":"Evaluating Off-the-Shelf Machine Listening and Natural Language Models for Automated Audio Captioning","authors":["B Weck, X Favory, K Drossos, X Serra - arXiv preprint arXiv:2110.07410, 2021"],"snippet":"… We use the publicly available model trained with subword information on the Common Crawl corpus, which contains 600B tokens and is significantly larger than the corpora used for the Glove and word2vec model [38]. We employ BERT as our …","url":["https://arxiv.org/pdf/2110.07410"]}
|
| 366 |
{"year":"2021","title":"Evaluating Sequence-to-Sequence Modelling for Dialogue State Tracking","authors":["M Tuli, S Agrawal"],"snippet":"… 3.2.1 Word embeddings. We use GloVe [21] pretrained word embeddings, as done in [9]. We use the GloVe-840B-300D embeddings: 300-dimensional embeddings trained on 840B tokens from Common Crawl. 3.2.2 Models …","url":["https://www.mathieutuli.com/docs/nmt_for_dst.pdf"]}
|
| 367 |
{"year":"2021","title":"Evaluating the Evaluation Metrics for Style Transfer: A Case Study in Multilingual Formality Transfer","authors":["E Briakou, S Agrawal, J Tetreault, M Carpuat - arXiv preprint arXiv:2110.10668, 2021"],"snippet":"While the field of style transfer (ST) has been growing rapidly, it has been hampered by a lack of standardized practices for automatic evaluation. In this paper, we evaluate leading ST automatic metrics on the oft-researched task of formality style …","url":["https://arxiv.org/pdf/2110.10668"]}
|
| 368 |
+
{"year":"2021","title":"Evaluating the Text-to-SQL Capabilities of Large Language Models","authors":["MVAEX TS","N Rajkumar, R Li, D Bahdanau - arXiv preprint arXiv:2204.00498, 2022"],"snippet":"… Starting from public checkpoints pretrained 062 on Common Crawl, the T5 model is finetuned on 063 Spider to predict the output SQL, conditioned on 064 the question and schema. The 3B parameter T5 065 model is currently the state-of-the-art …","url":["https://arxiv.org/pdf/2204.00498","https://openreview.net/pdf?id=lYli-bAuK54"]}
|
| 369 |
{"year":"2021","title":"Evaluating the textbugger NLP Attack on","authors":["S Eicher"],"snippet":"… 1 Introduction The ubiquity of deep learning models which learn from their user-base, like social recommenders or CAPTCHA[4], or other relatively unsupervised datasets, like the Common Crawl, in practice has given rise …","url":["http://cs229.stanford.edu/proj2021spr/report2/82526938.pdf"]}
|
| 370 |
{"year":"2021","title":"Evaluating Word Embeddings with Categorical Modularity","authors":["S Casacuberta, K Halevy, DE Blasi - arXiv preprint arXiv:2106.00877, 2021"],"snippet":"… these embeddings. FastText. Monolingual embeddings for 157 languages trained on Common Crawl and Wikipedia that use CBOW with position-weights and character n-grams (Bojanowski et al., 2017). MUSE. Cross-lingual …","url":["https://arxiv.org/pdf/2106.00877"]}
|
| 371 |
{"year":"2021","title":"Evaluation and Interpretation of Word Embeddings","authors":["J Kuchár"],"snippet":"… 2.3.1 Facebook's Wikipedia and Common Crawl FastText Word Embeddings for 157 languages Produced by Grave et al. [8], the Facebook's Wikipedia and Common Crawl fastText word embeddings may be the most-used and …","url":["https://is.muni.cz/th/eycxc/Evaluation_and_interpretation_of_word_embeddings_Archive.pdf"]}
|
|
|
|
| 832 |
{"year":"2021","title":"SAUCE: Truncated Sparse Document Signature Bit-Vectors for Fast Web-Scale Corpus Expansion","authors":["M Wahed, D Gruhl, A Alba, AL Gentile, P Ristoski… - arXiv preprint arXiv …, 2021"],"snippet":"… This experiment facilitates evaluation on a limited seed corpus scenario. The document collection is a snapshot (approximately 200 million documents) of the Common Crawl web archive1, which is regularly updated and consists …","url":["https://arxiv.org/pdf/2108.11948"]}
|
| 833 |
{"year":"2021","title":"Say What? Collaborative Pop Lyric Generation Using Multitask Transfer Learning","authors":["N Ram, T Gummadi, R Bhethanabotla, RJ Savery… - Proceedings of the 9th …, 2021"],"snippet":"… The authors opted for an encoder-decoder transformer based design, trained on a variant of the Common Crawl corpus named the Colossal Common Crawl Corpus (or C4 for short). The T5 shows a remarkable ability to tackle many different tasks with …","url":["https://dl.acm.org/doi/abs/10.1145/3472307.3484175"]}
|
| 834 |
{"year":"2021","title":"Scalable Graph Convolutional Variational Autoencoders","authors":["D Unyi, B Gyires-Tóth - 2021 IEEE 15th International Symposium on Applied …, 2021"],"snippet":"… In Reddit, nodes are Reddit posts, and if the same user commented on two posts, a link is drawn between them; node features are GloVe CommonCrawl word vectors [31] based on the average embedding of the …","url":["https://ieeexplore.ieee.org/abstract/document/9465579/"]}
|
| 835 |
+
{"year":"2021","title":"Scale Efficiently: Insights from Pre-training and Fine-tuning Transformers","authors":["AT Vaswani, D Yogatama, D Metzler, HW Chung… - 2022","FT TRANSFORMERS","Y Tay, M Dehghani, J Rao, W Fedus, S Abnar… - arXiv preprint arXiv …, 2021"],"snippet":"Kaplan et al. argues that the performance of a Transformer model strongly depends on the model size, but only weakly on the model shape. Our work empirically confirms their results for upstream training, but then reveals a striking discrepancy …","url":["https://arxiv.org/pdf/2109.10686","https://openreview.net/pdf?id=f2OYVDyfIB","https://research.google/pubs/pub52066.pdf"]}
|
| 836 |
{"year":"2021","title":"Scaling Laws for Transfer","authors":["D Hernandez, J Kaplan, T Henighan, S McCandlish - arXiv preprint arXiv:2102.01293, 2021"],"snippet":"… Pre-trained text models were trained on a mix of WebText2 described in [KMH+20], Common Crawl5 [RSR+20], English Wikipedia, and publicly available Internet Books … 5https://commoncrawl.org/the-data/ 6https://www.gharchive.org/ 6 Page 7. 3 Results …","url":["https://arxiv.org/pdf/2102.01293"]}
|
| 837 |
{"year":"2021","title":"Scarecrow: A Framework for Scrutinizing Machine Text","authors":["Y Dou, M Forbes, R Koncel-Kedziorski, NA Smith… - arXiv preprint arXiv …, 2021"],"snippet":"… GPT-3 DaVinci (Brown et al., 2020) The 175B parameter variant of GPT-3, which is trained on a version of the Common Crawl web scrape with additional filtering and deduplicating. These model choices allow us to study several …","url":["https://arxiv.org/pdf/2107.01294"]}
|
| 838 |
{"year":"2021","title":"SCOPA: Soft Code-Switching and Pairwise Alignment for Zero-Shot Cross-lingual Transfer","authors":["D Lee, J Lee, G Lee, B Chun, S Hwang - Proceedings of the 30th ACM International …, 2021"],"snippet":"… To enhance such transfer, XLM-R [4] is pre-trained on 100 languages with CommonCrawl Corpora, supervised by translational objectives, to transfer from resource-rich languages (eg, English and Chinese) to resource-poor languages …","url":["https://dl.acm.org/doi/abs/10.1145/3459637.3482176"]}
|
|
|
|
| 891 |
{"year":"2021","title":"Stacked Embeddings and Multiple Fine-Tuned XLM-RoBERTa Models for Enhanced Hostility Identification","authors":["XLM Fine-Tuned - Combating Online Hostile Posts in Regional …"],"snippet":"… XLM-RoBERTa. XLM-RoBERTa [10] is a large multilingual model trained on the CommonCrawl Dataset. There are two versions: base and large; both have around 250k words in the vocabulary, and the base has 250M parameters, while large has 560M …","url":["http://books.google.de/books?hl=en&lr=lang_en&id=yD8oEAAAQBAJ&oi=fnd&pg=PA224&dq=commoncrawl&ots=ai1kFeEaRU&sig=UkWmp4nmnIiSJ004dO8RQkmOvmk"]}
|
| 892 |
{"year":"2021","title":"StarryThoughts: Facilitating Diverse Opinion Exploration on Social Issues","authors":["H Kim, H Kim, KJ Jo, J Kim - Proceedings of the ACM on Human-Computer …, 2021"],"snippet":"… After translation, we computed each opinion's embedded vector with the algorithm using pre-trained GloVe word embeddings built upon 42B tokens from Common Crawl [49]. 4.3 Implementation details The front-end of StarryThoughts is implemented with React …","url":["https://dl.acm.org/doi/abs/10.1145/3449140"]}
|
| 893 |
{"year":"2021","title":"Step-unrolled Denoising Autoencoders for Text Generation","authors":["N Savinov, J Chung, M Binkowski, E Elsen, A Oord - arXiv preprint arXiv:2112.06749, 2021"],"snippet":"… results on unconditional language modeling on the Colossal Cleaned Common Crawl dataset and a dataset of Python code from GitHub. … • We demonstrate good qualitative results for unconditional generation and inpainting on Colossal Clean …","url":["https://arxiv.org/pdf/2112.06749"]}
|
| 894 |
+
{"year":"2021","title":"Stepmothers are mean and academics are pretentious: What do pretrained language models learn about you?","authors":["R Choenni, E Shutova, R van Rooij - arXiv preprint arXiv:2109.10052, 2021","WWE Do, FLDT Fine-Tuning"],"snippet":"… are monolingual and 2 multilingual: BERT (Devlin et al., 2019) uncased trained on the BooksCorpus dataset (Zhu et al., 2015) and English Wikipedia; RoBERTa (Liu et al., 2019), the optimized version of BERT that is in addition …","url":["https://arxiv.org/pdf/2109.10052","https://deepai.org/publication/stepmothers-are-mean-and-academics-are-pretentious-what-do-pretrained-language-models-learn-about-you"]}
|
| 895 |
{"year":"2021","title":"Stock Volume Prediction Based on Polarity of Tweets, News, and Historical Data Using Deep Learning","authors":["N Jawahar, J Chelladurai, I Sakthivel, B Bajracharya - 2020 2nd International …, 2020"],"snippet":"… token in order to predict the entity of that token. The CNN core model is pre-trained with GloVe vectors on Common Crawl, with 86.43% precision and 86.37% recall for NER. A python script is written that uses Psycopg to extract …","url":["https://dl.acm.org/doi/abs/10.1145/3440054.3440063"]}
|
| 896 |
{"year":"2021","title":"Storytelling Exhibitions: Identity, Truth and Wonder","authors":["P Hughes - 2021"],"snippet":""}
|
| 897 |
{"year":"2021","title":"Strategyproof Learning: Building Trustworthy User-Generated Datasets","authors":["S Farhadkhani, R Guerraoui, LN Hoang - arXiv preprint arXiv:2106.02398, 2021"],"snippet":"Page 1. arXiv:2106.02398v1 [cs.LG] 4 Jun 2021 Strategyproof Learning: Building Trustworthy User-Generated Datasets Sadegh Farhadkhani IC School, EPFL Lausanne, Switzerland [email protected] Rachid Guerraoui …","url":["https://arxiv.org/pdf/2106.02398"]}
|
|
|
|
| 1057 |
{"year":"2021","title":"Visualizing large-scale high-dimensional data via hierarchical embedding of KNN graphs","authors":["H Zhu, M Zhu, Y Feng, D Cai, Y Hu, S Wu, X Wu… - Visual Informatics, 2021"],"snippet":"JavaScript is disabled on your browser. Please enable JavaScript to use all the features on this page. Skip to main content Skip to article …","url":["https://www.sciencedirect.com/science/article/pii/S2468502X21000292"]}
|
| 1058 |
{"year":"2021","title":"VITALITY: Promoting Serendipitous Discovery of Academic Literature with Transformers & Visual Analytics","authors":["A Narechania, A Karduni, R Wesslen, E Wall - arXiv preprint arXiv:2108.03366, 2021"],"snippet":"… Unlike many pre-trained language models that use a general corpus like Wikipedia or the Common Crawl [14, 43, 49], SPECTER was pre-trained on academic literature (sciBERT [?]) and fine-tuned with citations which …","url":["https://arxiv.org/pdf/2108.03366"]}
|
| 1059 |
{"year":"2021","title":"VOH. CoLAB at TREC 2020 Health Misinformation Track⋆","authors":["SN Gonçalves, F Martins"],"snippet":"… 2.1 Collection The collection contains 65 million news articles from CommonCrawl News4 corresponding to the period from January to April of 2020 … Recently, vinegar has been promoted as a disinfectant (...) 4 …","url":["https://trec.nist.gov/pubs/trec29/papers/vohcolab.HM.pdf"]}
|
| 1060 |
+
{"year":"2021","title":"Voted In, Standing Out: Public Response to Immigrants' Political Accession","authors":["G Grossman, S Zonszein - 2021","S Zonszein, G Grossman - American Journal of Political Science, 2024"],"snippet":"… newspapers, covering the general elections from 2010–2019.8 This data is from Common Crawl, which is an open repository of web crawl data. We assume that an article refers to a candidate's ethnic group when three conditions are met: 1) the publication date is …","url":["https://files.osf.io/v1/resources/xd4wk/providers/osfstorage/614782978ae0920335d8c84c?action=download&direct&version=2","https://onlinelibrary.wiley.com/doi/pdf/10.1111/ajps.12877"]}
|
| 1061 |
{"year":"2021","title":"We Need to Talk About Data: The Importance of Data Readiness in Natural Language Processing","authors":["F Olsson, M Sahlgren - arXiv preprint arXiv:2110.05464, 2021"],"snippet":"In this paper, we identify the state of data as being an important reason for failure in applied Natural Language Processing (NLP) projects. We argue that there is a gap between academic research in NLP and its application to problems outside …","url":["https://arxiv.org/pdf/2110.05464"]}
|
| 1062 |
{"year":"2021","title":"Web Archive Analytics","authors":["M Völske, J Bevendorff, J Kiesel, B Stein, M Fröbe… - INFORMATIK 2020, 2021"],"snippet":"… in Figure 5, beginning with the bottom-most data acquisition layerȷ Primary sources for data ingestion include web crawls and web archives, such as the aforementioned Internet Archive, the Common Crawl,13 the older … 13 …","url":["https://dl.gi.de/bitstream/handle/20.500.12116/34759/A8-1.pdf?sequence=1&isAllowed=y"]}
|
| 1063 |
{"year":"2021","title":"Web Content Authentication: A Machine Learning Approach to Identify Fake and Authentic Web Pages on Internet","authors":["J Ashok, P Badoni - … Technology for Competitive Strategies (ICTCS 2020) …"],"snippet":"… SpringerLink (Springer, 21 June 2017). www. link. springer. com/chapter/10.1007/978- 3-319-69784-0_15 42. Link to GitHub where code is hosted. https://github. com/Jkrish1011/Web-Content-Authentic ator 43. https://commoncrawl. org/","url":["http://books.google.de/books?hl=en&lr=lang_en&id=Dwo3EAAAQBAJ&oi=fnd&pg=PA85&dq=commoncrawl&ots=RCTLIhR7zF&sig=_9zgxY48BSktz0Jk-c6Y7XTbVf0"]}
|
2022.jsonl
CHANGED
|
@@ -550,7 +550,7 @@
|
|
| 550 |
{"year":"2022","title":"GPT-3 and InstructGPT: technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry","authors":["A Chan - AI and Ethics, 2022"],"snippet":"This paper examines the ethical solutions raised in response to OpenAI’s language model Generative Pre-trained Transformer-3 (GPT-3) a year and a half from its release. I argue that hype and fear about GPT-3, even within the Natural Language …","url":["https://link.springer.com/article/10.1007/s43681-022-00148-6"]}
|
| 551 |
{"year":"2022","title":"GPT-3 for Few-Shot Dialogue State Tracking","authors":["N Pezzotti"],"snippet":"GPT-3 (Brown et al., 2020) has attracted considerable attention due to its superior performance across a wide range of Natural Language Processing (NLP) tasks, especially with its powerful and versatile in-context few-shot learning ability. That is …","url":["https://www.mlmi.eng.cam.ac.uk/files/2020-2021_dissertations/gpt_3_for_few_shot_dialogue_state_tracking.pdf"]}
|
| 552 |
{"year":"2022","title":"GPT-NeoX-20B: An Open-Source Autoregressive Language Model","authors":["S Black, S Biderman, E Hallahan, Q Anthony, L Gao…"],"snippet":"GPT-NeoX-20B is a 20 billion parameter autoregressive language model whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model …","url":["http://eaidata.bmk.sh/data/GPT_NeoX_20B.pdf"]}
|
| 553 |
-
{"year":"2022","title":"GPTs at Factify 2022: Prompt
|
| 554 |
{"year":"2022","title":"Grammatical Error Correction: A Survey of the State of the Art","authors":["C Bryant, Z Yuan, MR Qorib, H Cao, HT Ng, T Briscoe - arXiv preprint arXiv …, 2022"],"snippet":"Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but …","url":["https://arxiv.org/pdf/2211.05166"]}
|
| 555 |
{"year":"2022","title":"Granular Emotion Detection for Multi-Class Sentiment Analysis in Social Media","authors":["RH Frye - 2022"],"snippet":"… XLNet was trained on many of the same or similar datasets as BERT and RoBERTa, including CommonCrawl, BooksCorpus, and English … A clean derivation of the CommonCrawl Corpus was used in pretraining XLM-R, with one …","url":["https://search.proquest.com/openview/19befd3b5703b8663bf55b49a0bbf582/1.pdf?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 556 |
{"year":"2022","title":"GraphIC: A graph-based approach for identifying complaints from code-mixed product reviews","authors":["A Singh, S Saha - Expert Systems with Applications, 2022"],"snippet":"… Wikipedia’s and Common Crawl’s publicly available corpora for 17 Indian languages were employed for monolingual data. PMINDIA and Dakshina corpora were used to obtain translated and transliterated data for parallel segments. For code-mixed …","url":["https://www.sciencedirect.com/science/article/pii/S0957417422024630"]}
|
|
@@ -884,7 +884,7 @@
|
|
| 884 |
{"year":"2022","title":"Overview of HIPE-2022: Named Entity Recognition and Linking in Multilingual Historical Documents","authors":["A Doucet, S Clematide - … IR Meets Multilinguality, Multimodality, and Interaction …, 2022","M Ehrmann, M Romanello, S Najem-Meyer, A Doucet… - International Conference of …, 2022","S Clematide"],"snippet":"This paper presents an overview of the second edition of HIPE (Identifying Historical People, Places and other Entities), a shared task on named entity recognition and linking in multilingual historical documents. Following the success of the first CLEF-HIPE-2020 …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=LzaFEAAAQBAJ&oi=fnd&pg=PA423&dq=commoncrawl&ots=LmT-NMkkn2&sig=Q_jxi2Mi1rvfcsodMOiSk1LYkXY","https://hipe-eval.github.io/HIPE-2022/assets/pdf/HIPE_2022_LNCS_CondensedLabOverview_accepted_version.pdf","https://link.springer.com/chapter/10.1007/978-3-031-13643-6_26"]}
|
| 885 |
{"year":"2022","title":"Overview of NTCIR-16","authors":["T Yamamoto, Z Dou","Z Dou, T Yamamoto"],"snippet":"… Chuweb21 is a subset of the Common Crawl dataset and it contains 3,402,457 domains and 858,616,203 web pages. Secondly, two versions of relevance assessment are introduced: the Gold version given by the topic creators, and the …","url":["https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings16/pdf/ntcir/01-NTCIR16-OV-YamamotoT-slides.pdf","https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings16/pdf/ntcir/01-NTCIR16-OV-YamamotoT.pdf"]}
|
| 886 |
{"year":"2022","title":"Overview of the 2022 BUCC Shared Task: Bilingual Term Alignment in Comparable Specialized Corpora","authors":["O Adjali, E Morin, S Sharoff, R Rapp, P Zweigenbaum - LREC 2022 Workshop …, 2022"],"snippet":"The BUCC 2022 shared task addressed bilingual terminology alignment in comparable corpora. Many research groups are working on this problem using a wide variety of approaches. However, as there is no standard way to measure the …","url":["https://comparable.limsi.fr/bucc2022/BUCC2022-proceedings-20220617.pdf#page=77"]}
|
| 887 |
-
{"year":"2022","title":"Overview of Touché 2022: Argument Retrieval","authors":["A Bondarenko, M Fröbe, J Kiesel, S Syed, T Gurcke… - 2022","T Gurcke, M Beloucif, A Panchenko, C Biemann… - Experimental IR Meets …, 2022"],"snippet":"This paper is a report on the third year of the Touché lab on argument retrieval
|
| 888 |
{"year":"2022","title":"PANGUBOT: Efficient Generative Dialogue Pre-training from Pre-trained Language Model","authors":["F Mi, Y Li, Y Zeng, J Zhou, Y Wang, C Xu, L Shang… - arXiv preprint arXiv …, 2022"],"snippet":"In this paper, we introduce PANGUBOT, a Chinese pre-trained open-domain dialogue generation model based on a large pre-trained language model (PLM) PANGU-alpha (Zeng et al.,2021). Different from other pre-trained dialogue models …","url":["https://arxiv.org/pdf/2203.17090"]}
|
| 889 |
{"year":"2022","title":"Papago's Submission for the WMT21 Quality Estimation Shared Task","authors":["S Lim, H Kim, H Kim - Proceedings of the Sixth Conference on Machine …, 2021"],"snippet":"This paper describes Papago submission to the WMT 2021 Quality Estimation Task 1: Sentence-level Direct Assessment. Our multilingual Quality Estimation system explores the combination of Pretrained Language Models and Multi-task Learning …","url":["https://aclanthology.org/2021.wmt-1.98.pdf"]}
|
| 890 |
{"year":"2022","title":"Paragraph-based Transformer Pre-training for Multi-Sentence Inference","authors":["L Di Liello, S Garg, L Soldaini, A Moschitti - arXiv preprint arXiv:2205.01228, 2022"],"snippet":"Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling …","url":["https://arxiv.org/pdf/2205.01228"]}
|
|
@@ -980,8 +980,8 @@
|
|
| 980 |
{"year":"2022","title":"Research Article Qualitative Analysis of Text Summarization Techniques and Its Applications in Health Domain","authors":["AK Yadav, KV Bhadane, A Kumar, B Khan - 2022"],"snippet":"Summarizing textual information requires understanding and analyzing the linguistic, conceptual, and semantic attributes of the given information. In addition, a summary generated should succeed in incorporating the essential details and the main ideas …","url":["https://www.academia.edu/download/83013544/3411881.pdf"]}
|
| 981 |
{"year":"2022","title":"Research Background","authors":["R Richner - Auto-Grader-Auto-Grading Free Text Answers, 2022"],"snippet":"Research Background Page 1 3 Research Background Firstly, this chapter will introduce the technological background needed to understand how a state-of-the-art auto-grader may look and secondly elaborate on related work in the field of …","url":["https://link.springer.com/content/pdf/10.1007/978-3-658-39203-1_3.pdf"]}
|
| 982 |
{"year":"2022","title":"Rethinking Data Governance: A Labor-Oriented Approach","authors":["H LI, N VINCENT - 2022"],"snippet":"… Prominent examples include Flickr photos [12], Wikipedia articles [14], and the Common Crawl dataset consisting of publicly available webpages [11]. In many of such cases, users produce data without being aware of its value and potential …","url":["https://criticalautomation.org/wp-content/uploads/2022/03/li-vincent-data-governance.pdf"]}
|
| 983 |
-
{"year":"2022","title":"Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?","authors":["S Min, X Lyu, A Holtzman, M Artetxe, M Lewis… - arXiv preprint arXiv …, 2022","WMICL Work"],"snippet":"Large language models (LMs) are able to in-context learn--perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of …","url":["https://arxiv.org/pdf/2202.12837","https://openreview.net/pdf?id=cnRGMv-Ak7u"]}
|
| 984 |
-
{"year":"2022","title":"Revisiting CCNet for
|
| 985 |
{"year":"2022","title":"Revisiting DocRED--Addressing the Overlooked False Negative Problem in Relation Extraction","authors":["Q Tan, L Xu, L Bing, HT Ng - arXiv preprint arXiv:2205.12696, 2022"],"snippet":"The DocRED dataset is one of the most popular and widely used benchmarks for document-level relation extraction (RE). It adopts a recommend-revise annotation scheme so as to have a large-scale annotated dataset. However, we find that the …","url":["https://arxiv.org/pdf/2205.12696"]}
|
| 986 |
{"year":"2022","title":"Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding","authors":["A Ghaddar, Y Wu, S Bagga, A Rashid, K Bibi… - arXiv preprint arXiv …, 2022"],"snippet":"… We collect pre-training data from the following sources: common crawl (CC), news (NEWS, ELKHAIR), and Wikipedia (WIKI).Recent … Common Crawl (CC): We used 10 shards of Common Crawl8 data from March to December 2020. After …","url":["https://arxiv.org/pdf/2205.10687"]}
|
| 987 |
{"year":"2022","title":"RigoBERTa: A State-of-the-Art Language Model For Spanish","authors":["AV Serrano, GG Subies, HM Zamorano, NA Garcia… - arXiv preprint arXiv …, 2022"],"snippet":"… OSCAR [28] [29] is a very large multilingual corpus, obtained by language classification and filtering of the CommonCrawl7. It has a portion of … For example, javascript code, automatic writings, poor automatic translations or malformed …","url":["https://arxiv.org/pdf/2205.10233"]}
|
|
@@ -1095,7 +1095,7 @@
|
|
| 1095 |
{"year":"2022","title":"Textual Inference Identification in the Malayalam Language Using Convolutional Neural Network","authors":["S Renjit, SM Idicula - Advanced Computing and Intelligent Technologies, 2022"],"snippet":"… fastText [11] has a collection of pre-trained word vectors in 157 languages, where each language word collection is from Wikipedia and common crawl. Such collections provide a good start for language processing tasks in low-resource …","url":["https://link.springer.com/chapter/10.1007/978-981-19-2980-9_20"]}
|
| 1096 |
{"year":"2022","title":"The (Moral) Language of Hate","authors":["B Kennedy, P Golazizian, J Trager, M Atari, J Hoover… - 2022"],"snippet":"… In the present study, we extract the FastText embeddings — via the pre-trained set of embeddings10, which are trained on a combination of Wikipedia and Common Crawl data — of terms in the Weaponized Word lexicon, for the 19 …","url":["https://psyarxiv.com/eqp34/download?format=pdf"]}
|
| 1097 |
{"year":"2022","title":"THE ADVANCE TECHNIQUES USED IN CYBER SECURITY FOR PHISHING DETECTION","authors":["P Chate, MS Maske, MS Maske"],"snippet":"The Internet has become a necessary part of our lives; however, it has also provided opportunities to carry out malicious activities anonymously like Phishing. Phishers try to trick their victims through social engineering or create mock-up websites to …","url":["https://www.researchgate.net/profile/Parinita-Chate-2/publication/362539921_THE_ADVANCE_TECHNIQUES_USED_IN_CYBER_SECURITY_FOR_PHISHING_DETECTION/links/62efc6874532247693889dd0/THE-ADVANCE-TECHNIQUES-USED-IN-CYBER-SECURITY-FOR-PHISHING-DETECTION.pdf"]}
|
| 1098 |
-
{"year":"2022","title":"The AISP-SJTU Translation System for WMT 2022","authors":["G Liu, Q Zhu, X Chen, R Feng, J Ren, R Wu, Q Miao… - Proceedings of the Seventh …, 2022","GLQZX Chen, RFJRR Wu, QMRWK Yu"],"snippet":"… For monolingual data, we select data from News Crawl, Common Crawl and Extended Common Crawl, and the amount of data after
|
| 1099 |
{"year":"2022","title":"The case for 4-bit precision: k-bit Inference Scaling Laws","authors":["T Dettmers, L Zettlemoyer - arXiv preprint arXiv:2212.09720, 2022"],"snippet":"… Furthermore, we find that across more than 35,000 zero-shot experiments, the Pearson correlation coefficient between The Pile Common Crawl perplexity and zero-shot performance is -0.94. … In this section, we present data for evaluation on The Pile …","url":["https://arxiv.org/pdf/2212.09720"]}
|
| 1100 |
{"year":"2022","title":"The Causal News Corpus: Annotating Causal Relations in Event Sentences from News","authors":["FA Tan, A Hürriyetoğlu, T Caselli, N Oostdijk, T Nomoto… - arXiv preprint arXiv …, 2022"],"snippet":"Despite the importance of understanding causality, corpora addressing causal relations are limited. There is a discrepancy between existing annotation guidelines of event causality and conventional causality corpora that focus more on linguistics …","url":["https://arxiv.org/pdf/2204.11714"]}
|
| 1101 |
{"year":"2022","title":"The Curious Case of Control","authors":["E Stengel-Eskin, B Van Durme - arXiv preprint arXiv:2205.12113, 2022"],"snippet":"… The training data is based on Common Crawl, though similarly to GPT-3 Davinci, the details of the training data filtering process are unclear. Relevant differences to GPT-3 are in the tokenization (which includes multi-word expressions) and use of …","url":["https://arxiv.org/pdf/2205.12113"]}
|
|
@@ -1215,10 +1215,10 @@
|
|
| 1215 |
{"year":"2022","title":"Utilizing subjectivity level to mitigate identity term bias in toxic comments classification","authors":["Z Zhao, Z Zhang, F Hopfgartner - Online Social Networks and Media, 2022"],"snippet":"Toxic comment classification models are often found biased towards identity terms, ie, terms characterizing a specific group of people such as “Muslim” and “black”. Such bias is commonly reflected in false positive predictions, ie, non-toxic comments with …","url":["https://www.sciencedirect.com/science/article/pii/S246869642200009X"]}
|
| 1216 |
{"year":"2022","title":"UTSA NLP at SemEval-2022 Task 4: An Exploration of Simple Ensembles of Transformers, Convolutional, and Recurrent Neural Networks","authors":["X Zhao, A Rios - arXiv preprint arXiv:2203.14920, 2022"],"snippet":"The act of appearing kind or helpful via the use of but having a feeling of superiority condescending and patronizing language can have have serious mental health implications to those that experience it. Thus, detecting this condescending and …","url":["https://arxiv.org/pdf/2203.14920"]}
|
| 1217 |
{"year":"2022","title":"UWaterlooMDS at the TREC 2021 Health Misinformation Track","authors":["M ABUALSAUD, IX CHEN, K GHAJAR, LNHI PHAN…"],"snippet":"… Using the hosts in M as a base, we expanded this list using the common crawl host graph 8. The hosts graph contains roughly 4 million nodes … We do this by calculating PageRank scores in a subset of the common crawl host-level graph. The …","url":["https://trec.nist.gov/pubs/trec30/papers/UwaterlooMDS-HM.pdf"]}
|
| 1218 |
-
{"year":"2022","title":"Vega-MT: The JD Explore Academy Translation System for WMT22","authors":["C Zan, K Peng, L Ding, B Qiu, B Liu, S He, Q Lu… - arXiv preprint arXiv …, 2022","C Zanℜ, K Peng, L Dingℜ, B Qiu, B Liu, S He, Q Lu…"],"snippet":"We describe the JD Explore Academy
|
| 1219 |
{"year":"2022","title":"Vicomtech at DA-VINCIS: Detection of Aggressive and Violent Incidents from Social Media in Spanish","authors":["P Turón, N Perez, A García-Pablos, E Zotova… - 2022"],"snippet":"This paper describes the participation of the Vicomtech NLP team in the DA-VINCIS shared task. This shared task is focused on mentions of violent events in Spanish tweets, and proposes two subtasks: first, detecting whether a violent incident is …","url":["http://ceur-ws.org/Vol-3202/davincis-paper4.pdf"]}
|
| 1220 |
{"year":"2022","title":"Video Games as a Corpus: Sentiment Analysis using Fallout New Vegas Dialog","authors":["M Hämäläinen, K Alnajjar, T Poibeau - 2022"],"snippet":"We present a method for extracting a multilingual sentiment annotated dialog data set from Fallout New Vegas. The game developers have preannotated every line of dialog in the game in one of the 8 different sentiments: anger, disgust, fear, happy …","url":["https://www.researchgate.net/profile/Mika-Haemaelaeinen/publication/363367422_Video_Games_as_a_Corpus_Sentiment_Analysis_using_Fallout_New_Vegas_Dialog/links/6319cc1870cc936cd3f1ae29/Video-Games-as-a-Corpus-Sentiment-Analysis-using-Fallout-New-Vegas-Dialog.pdf"]}
|
| 1221 |
-
{"year":"2022","title":"Vietnamese
|
| 1222 |
{"year":"2022","title":"Visualization of 2D fractal structures associated with the Riemann zeta function","authors":["I Belovas, M Sabaliauskas, L Kuzma - DAMSS: 13th conference on data analysis …, 2022"],"snippet":"DAMSS-2022 is the 13th International Conference on Data Analysis Methods for Software Systems, held in Druskininkai, Lithuania. Every year at the same place and time. The exception was in 2020, when the world was gripped by the Covid-19 …","url":["https://epublications.vu.lt/object/elaba:147807019/147807019.pdf"]}
|
| 1223 |
{"year":"2022","title":"Visuelle Exploration von indirekten Befangenheiten bei der Verarbeitung natürlicher Sprachen durch Transformer Modelle","authors":["JLAD Petit-Frere"],"snippet":"… Common Crawl corpora are composed of text dataset collected from web pages and contains several billion tokens … This metric was used to measure the biases within the Common Crawl pre-trained GloVe model, an ELMo model, and the bert-base …","url":["https://www.cg.tuwien.ac.at/research/publications/2022/louis-alexandre_dit_petit-frere-2022-veo/louis-alexandre_dit_petit-frere-2022-veo-thesis.pdf"]}
|
| 1224 |
{"year":"2022","title":"ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation","authors":["L Phan, H Tran, H Nguyen, TH Trinh - arXiv preprint arXiv:2205.06457, 2022"],"snippet":"We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on …","url":["https://arxiv.org/pdf/2205.06457"]}
|
|
|
|
| 550 |
{"year":"2022","title":"GPT-3 and InstructGPT: technological dystopianism, utopianism, and “Contextual” perspectives in AI ethics and industry","authors":["A Chan - AI and Ethics, 2022"],"snippet":"This paper examines the ethical solutions raised in response to OpenAI’s language model Generative Pre-trained Transformer-3 (GPT-3) a year and a half from its release. I argue that hype and fear about GPT-3, even within the Natural Language …","url":["https://link.springer.com/article/10.1007/s43681-022-00148-6"]}
|
| 551 |
{"year":"2022","title":"GPT-3 for Few-Shot Dialogue State Tracking","authors":["N Pezzotti"],"snippet":"GPT-3 (Brown et al., 2020) has attracted considerable attention due to its superior performance across a wide range of Natural Language Processing (NLP) tasks, especially with its powerful and versatile in-context few-shot learning ability. That is …","url":["https://www.mlmi.eng.cam.ac.uk/files/2020-2021_dissertations/gpt_3_for_few_shot_dialogue_state_tracking.pdf"]}
|
| 552 |
{"year":"2022","title":"GPT-NeoX-20B: An Open-Source Autoregressive Language Model","authors":["S Black, S Biderman, E Hallahan, Q Anthony, L Gao…"],"snippet":"GPT-NeoX-20B is a 20 billion parameter autoregressive language model whose weights will be made freely and openly available to the public through a permissive license. It is, to the best of our knowledge, the largest dense autoregressive model …","url":["http://eaidata.bmk.sh/data/GPT_NeoX_20B.pdf"]}
|
| 553 |
+
{"year":"2022","title":"GPTs at Factify 2022: Prompt aided fact-verification","authors":["PK Sahu, S Aggarwal, T Gupta, G Das - arXiv preprint arXiv:2206.14913, 2022","S Aggarwal, P Sahu, T Gupta, G Das - Proceedings of De-Factify: Workshop on …, 2022"],"snippet":"One of the most pressing societal issues is the fight against false news. The false claims, as difficult as they are to expose, create a lot of damage. To tackle with the problem, fact verification becomes crucial and thus has been a topic of interest …","url":["http://ceur-ws.org/Vol-3199/paper11.pdf","https://arxiv.org/pdf/2206.14913"]}
|
| 554 |
{"year":"2022","title":"Grammatical Error Correction: A Survey of the State of the Art","authors":["C Bryant, Z Yuan, MR Qorib, H Cao, HT Ng, T Briscoe - arXiv preprint arXiv …, 2022"],"snippet":"Grammatical Error Correction (GEC) is the task of automatically detecting and correcting errors in text. The task not only includes the correction of grammatical errors, such as missing prepositions and mismatched subject-verb agreement, but …","url":["https://arxiv.org/pdf/2211.05166"]}
|
| 555 |
{"year":"2022","title":"Granular Emotion Detection for Multi-Class Sentiment Analysis in Social Media","authors":["RH Frye - 2022"],"snippet":"… XLNet was trained on many of the same or similar datasets as BERT and RoBERTa, including CommonCrawl, BooksCorpus, and English … A clean derivation of the CommonCrawl Corpus was used in pretraining XLM-R, with one …","url":["https://search.proquest.com/openview/19befd3b5703b8663bf55b49a0bbf582/1.pdf?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 556 |
{"year":"2022","title":"GraphIC: A graph-based approach for identifying complaints from code-mixed product reviews","authors":["A Singh, S Saha - Expert Systems with Applications, 2022"],"snippet":"… Wikipedia’s and Common Crawl’s publicly available corpora for 17 Indian languages were employed for monolingual data. PMINDIA and Dakshina corpora were used to obtain translated and transliterated data for parallel segments. For code-mixed …","url":["https://www.sciencedirect.com/science/article/pii/S0957417422024630"]}
|
|
|
|
| 884 |
{"year":"2022","title":"Overview of HIPE-2022: Named Entity Recognition and Linking in Multilingual Historical Documents","authors":["A Doucet, S Clematide - … IR Meets Multilinguality, Multimodality, and Interaction …, 2022","M Ehrmann, M Romanello, S Najem-Meyer, A Doucet… - International Conference of …, 2022","S Clematide"],"snippet":"This paper presents an overview of the second edition of HIPE (Identifying Historical People, Places and other Entities), a shared task on named entity recognition and linking in multilingual historical documents. Following the success of the first CLEF-HIPE-2020 …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=LzaFEAAAQBAJ&oi=fnd&pg=PA423&dq=commoncrawl&ots=LmT-NMkkn2&sig=Q_jxi2Mi1rvfcsodMOiSk1LYkXY","https://hipe-eval.github.io/HIPE-2022/assets/pdf/HIPE_2022_LNCS_CondensedLabOverview_accepted_version.pdf","https://link.springer.com/chapter/10.1007/978-3-031-13643-6_26"]}
|
| 885 |
{"year":"2022","title":"Overview of NTCIR-16","authors":["T Yamamoto, Z Dou","Z Dou, T Yamamoto"],"snippet":"… Chuweb21 is a subset of the Common Crawl dataset and it contains 3,402,457 domains and 858,616,203 web pages. Secondly, two versions of relevance assessment are introduced: the Gold version given by the topic creators, and the …","url":["https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings16/pdf/ntcir/01-NTCIR16-OV-YamamotoT-slides.pdf","https://research.nii.ac.jp/ntcir/workshop/OnlineProceedings16/pdf/ntcir/01-NTCIR16-OV-YamamotoT.pdf"]}
|
| 886 |
{"year":"2022","title":"Overview of the 2022 BUCC Shared Task: Bilingual Term Alignment in Comparable Specialized Corpora","authors":["O Adjali, E Morin, S Sharoff, R Rapp, P Zweigenbaum - LREC 2022 Workshop …, 2022"],"snippet":"The BUCC 2022 shared task addressed bilingual terminology alignment in comparable corpora. Many research groups are working on this problem using a wide variety of approaches. However, as there is no standard way to measure the …","url":["https://comparable.limsi.fr/bucc2022/BUCC2022-proceedings-20220617.pdf#page=77"]}
|
| 887 |
+
{"year":"2022","title":"Overview of Touché 2022: Argument Retrieval","authors":["A Bondarenko, M Fröbe, J Kiesel, S Syed, T Gurcke… - 2022","T Gurcke, M Beloucif, A Panchenko, C Biemann… - Experimental IR Meets …, 2022"],"snippet":"This paper is a condensed report on the third year of the Touché lab on argument retrieval held at CLEF 2022. With the goal to foster and support the development of technologies for argument mining and argument analysis, we organized three …","url":["http://ceur-ws.org/Vol-3180/paper-247.pdf","https://books.google.de/books?hl=en&lr=lang_en&id=LzaFEAAAQBAJ&oi=fnd&pg=PA311&dq=commoncrawl&ots=LmT-NMkkn2&sig=yArB4HgQifPyU4Ny0t18ZGgSD1E"]}
|
| 888 |
{"year":"2022","title":"PANGUBOT: Efficient Generative Dialogue Pre-training from Pre-trained Language Model","authors":["F Mi, Y Li, Y Zeng, J Zhou, Y Wang, C Xu, L Shang… - arXiv preprint arXiv …, 2022"],"snippet":"In this paper, we introduce PANGUBOT, a Chinese pre-trained open-domain dialogue generation model based on a large pre-trained language model (PLM) PANGU-alpha (Zeng et al.,2021). Different from other pre-trained dialogue models …","url":["https://arxiv.org/pdf/2203.17090"]}
|
| 889 |
{"year":"2022","title":"Papago's Submission for the WMT21 Quality Estimation Shared Task","authors":["S Lim, H Kim, H Kim - Proceedings of the Sixth Conference on Machine …, 2021"],"snippet":"This paper describes Papago submission to the WMT 2021 Quality Estimation Task 1: Sentence-level Direct Assessment. Our multilingual Quality Estimation system explores the combination of Pretrained Language Models and Multi-task Learning …","url":["https://aclanthology.org/2021.wmt-1.98.pdf"]}
|
| 890 |
{"year":"2022","title":"Paragraph-based Transformer Pre-training for Multi-Sentence Inference","authors":["L Di Liello, S Garg, L Soldaini, A Moschitti - arXiv preprint arXiv:2205.01228, 2022"],"snippet":"Inference tasks such as answer sentence selection (AS2) or fact verification are typically solved by fine-tuning transformer-based models as individual sentence-pair classifiers. Recent studies show that these tasks benefit from modeling …","url":["https://arxiv.org/pdf/2205.01228"]}
|
|
|
|
| 980 |
{"year":"2022","title":"Research Article Qualitative Analysis of Text Summarization Techniques and Its Applications in Health Domain","authors":["AK Yadav, KV Bhadane, A Kumar, B Khan - 2022"],"snippet":"Summarizing textual information requires understanding and analyzing the linguistic, conceptual, and semantic attributes of the given information. In addition, a summary generated should succeed in incorporating the essential details and the main ideas …","url":["https://www.academia.edu/download/83013544/3411881.pdf"]}
|
| 981 |
{"year":"2022","title":"Research Background","authors":["R Richner - Auto-Grader-Auto-Grading Free Text Answers, 2022"],"snippet":"Research Background Page 1 3 Research Background Firstly, this chapter will introduce the technological background needed to understand how a state-of-the-art auto-grader may look and secondly elaborate on related work in the field of …","url":["https://link.springer.com/content/pdf/10.1007/978-3-658-39203-1_3.pdf"]}
|
| 982 |
{"year":"2022","title":"Rethinking Data Governance: A Labor-Oriented Approach","authors":["H LI, N VINCENT - 2022"],"snippet":"… Prominent examples include Flickr photos [12], Wikipedia articles [14], and the Common Crawl dataset consisting of publicly available webpages [11]. In many of such cases, users produce data without being aware of its value and potential …","url":["https://criticalautomation.org/wp-content/uploads/2022/03/li-vincent-data-governance.pdf"]}
|
| 983 |
+
{"year":"2022","title":"Rethinking the Role of Demonstrations: What Makes In-Context Learning Work?","authors":["S Min, X Lyu, A Holtzman, M Artetxe, M Lewis… - arXiv preprint arXiv …, 2022","WMICL Work"],"snippet":"Large language models (LMs) are able to in-context learn -- perform a new task via inference alone by conditioning on a few input-label pairs (demonstrations) and making predictions for new inputs. However, there has been little understanding of …","url":["https://arxiv.org/pdf/2202.12837","https://openreview.net/pdf?id=cnRGMv-Ak7u"]}
|
| 984 |
+
{"year":"2022","title":"Revisiting CCNet for quality measurements in Galician","authors":["JE Ortega, I de-Dios-Flores, JR Pichel, P Gamallo - International Conference on …, 2022","JR Pichel, P Gamallo","P Gamallo - … Processing of the Portuguese Language: 15th …"],"snippet":"… In this article, we present our findings on reproducing the introduction of the common crawl corpus by Facebook, known as the CCNet … at a given time – the work on CCNet is based on the CommonCrawl dataset from February 2019 which is …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=Df9kEAAAQBAJ&oi=fnd&pg=PA407&dq=commoncrawl&ots=UGnguc7I3u&sig=8yrKGw8bPU5TtHaLaA-1I63OUas","https://gramatica.usc.es/~gamallo/artigos-web/PROPOR2022.pdf","https://link.springer.com/chapter/10.1007/978-3-030-98305-5_38"]}
|
| 985 |
{"year":"2022","title":"Revisiting DocRED--Addressing the Overlooked False Negative Problem in Relation Extraction","authors":["Q Tan, L Xu, L Bing, HT Ng - arXiv preprint arXiv:2205.12696, 2022"],"snippet":"The DocRED dataset is one of the most popular and widely used benchmarks for document-level relation extraction (RE). It adopts a recommend-revise annotation scheme so as to have a large-scale annotated dataset. However, we find that the …","url":["https://arxiv.org/pdf/2205.12696"]}
|
| 986 |
{"year":"2022","title":"Revisiting Pre-trained Language Models and their Evaluation for Arabic Natural Language Understanding","authors":["A Ghaddar, Y Wu, S Bagga, A Rashid, K Bibi… - arXiv preprint arXiv …, 2022"],"snippet":"… We collect pre-training data from the following sources: common crawl (CC), news (NEWS, ELKHAIR), and Wikipedia (WIKI).Recent … Common Crawl (CC): We used 10 shards of Common Crawl8 data from March to December 2020. After …","url":["https://arxiv.org/pdf/2205.10687"]}
|
| 987 |
{"year":"2022","title":"RigoBERTa: A State-of-the-Art Language Model For Spanish","authors":["AV Serrano, GG Subies, HM Zamorano, NA Garcia… - arXiv preprint arXiv …, 2022"],"snippet":"… OSCAR [28] [29] is a very large multilingual corpus, obtained by language classification and filtering of the CommonCrawl7. It has a portion of … For example, javascript code, automatic writings, poor automatic translations or malformed …","url":["https://arxiv.org/pdf/2205.10233"]}
|
| 1095 |
{"year":"2022","title":"Textual Inference Identification in the Malayalam Language Using Convolutional Neural Network","authors":["S Renjit, SM Idicula - Advanced Computing and Intelligent Technologies, 2022"],"snippet":"… fastText [11] has a collection of pre-trained word vectors in 157 languages, where each language word collection is from Wikipedia and common crawl. Such collections provide a good start for language processing tasks in low-resource …","url":["https://link.springer.com/chapter/10.1007/978-981-19-2980-9_20"]}
|
| 1096 |
{"year":"2022","title":"The (Moral) Language of Hate","authors":["B Kennedy, P Golazizian, J Trager, M Atari, J Hoover… - 2022"],"snippet":"… In the present study, we extract the FastText embeddings — via the pre-trained set of embeddings10, which are trained on a combination of Wikipedia and Common Crawl data — of terms in the Weaponized Word lexicon, for the 19 …","url":["https://psyarxiv.com/eqp34/download?format=pdf"]}
|
| 1097 |
{"year":"2022","title":"THE ADVANCE TECHNIQUES USED IN CYBER SECURITY FOR PHISHING DETECTION","authors":["P Chate, MS Maske, MS Maske"],"snippet":"The Internet has become a necessary part of our lives; however, it has also provided opportunities to carry out malicious activities anonymously like Phishing. Phishers try to trick their victims through social engineering or create mock-up websites to …","url":["https://www.researchgate.net/profile/Parinita-Chate-2/publication/362539921_THE_ADVANCE_TECHNIQUES_USED_IN_CYBER_SECURITY_FOR_PHISHING_DETECTION/links/62efc6874532247693889dd0/THE-ADVANCE-TECHNIQUES-USED-IN-CYBER-SECURITY-FOR-PHISHING-DETECTION.pdf"]}
|
| 1098 |
+
{"year":"2022","title":"The AISP-SJTU Translation System for WMT 2022","authors":["G Liu, Q Zhu, X Chen, R Feng, J Ren, R Wu, Q Miao… - Proceedings of the Seventh …, 2022","GLQZX Chen, RFJRR Wu, QMRWK Yu"],"snippet":"… For monolingual data, we select data from News Crawl, Common Crawl and Extended Common Crawl, and the amount of data after processing is shown in Table 2. For generating pseudo-data, we use all source monolingual to generate …","url":["https://aclanthology.org/2022.wmt-1.24.pdf","https://www.statmt.org/wmt22/pdf/2022.wmt-1.24.pdf"]}
|
| 1099 |
{"year":"2022","title":"The case for 4-bit precision: k-bit Inference Scaling Laws","authors":["T Dettmers, L Zettlemoyer - arXiv preprint arXiv:2212.09720, 2022"],"snippet":"… Furthermore, we find that across more than 35,000 zero-shot experiments, the Pearson correlation coefficient between The Pile Common Crawl perplexity and zero-shot performance is -0.94. … In this section, we present data for evaluation on The Pile …","url":["https://arxiv.org/pdf/2212.09720"]}
|
| 1100 |
{"year":"2022","title":"The Causal News Corpus: Annotating Causal Relations in Event Sentences from News","authors":["FA Tan, A Hürriyetoğlu, T Caselli, N Oostdijk, T Nomoto… - arXiv preprint arXiv …, 2022"],"snippet":"Despite the importance of understanding causality, corpora addressing causal relations are limited. There is a discrepancy between existing annotation guidelines of event causality and conventional causality corpora that focus more on linguistics …","url":["https://arxiv.org/pdf/2204.11714"]}
|
| 1101 |
{"year":"2022","title":"The Curious Case of Control","authors":["E Stengel-Eskin, B Van Durme - arXiv preprint arXiv:2205.12113, 2022"],"snippet":"… The training data is based on Common Crawl, though similarly to GPT-3 Davinci, the details of the training data filtering process are unclear. Relevant differences to GPT-3 are in the tokenization (which includes multi-word expressions) and use of …","url":["https://arxiv.org/pdf/2205.12113"]}
|
| 1215 |
{"year":"2022","title":"Utilizing subjectivity level to mitigate identity term bias in toxic comments classification","authors":["Z Zhao, Z Zhang, F Hopfgartner - Online Social Networks and Media, 2022"],"snippet":"Toxic comment classification models are often found biased towards identity terms, ie, terms characterizing a specific group of people such as “Muslim” and “black”. Such bias is commonly reflected in false positive predictions, ie, non-toxic comments with …","url":["https://www.sciencedirect.com/science/article/pii/S246869642200009X"]}
|
| 1216 |
{"year":"2022","title":"UTSA NLP at SemEval-2022 Task 4: An Exploration of Simple Ensembles of Transformers, Convolutional, and Recurrent Neural Networks","authors":["X Zhao, A Rios - arXiv preprint arXiv:2203.14920, 2022"],"snippet":"The act of appearing kind or helpful via the use of but having a feeling of superiority condescending and patronizing language can have have serious mental health implications to those that experience it. Thus, detecting this condescending and …","url":["https://arxiv.org/pdf/2203.14920"]}
|
| 1217 |
{"year":"2022","title":"UWaterlooMDS at the TREC 2021 Health Misinformation Track","authors":["M ABUALSAUD, IX CHEN, K GHAJAR, LNHI PHAN…"],"snippet":"… Using the hosts in M as a base, we expanded this list using the common crawl host graph 8. The hosts graph contains roughly 4 million nodes … We do this by calculating PageRank scores in a subset of the common crawl host-level graph. The …","url":["https://trec.nist.gov/pubs/trec30/papers/UwaterlooMDS-HM.pdf"]}
|
| 1218 |
+
{"year":"2022","title":"Vega-MT: The JD Explore Academy Translation System for WMT22","authors":["C Zan, K Peng, L Ding, B Qiu, B Liu, S He, Q Lu… - arXiv preprint arXiv …, 2022","C Zanℜ, K Peng, L Dingℜ, B Qiu, B Liu, S He, Q Lu…"],"snippet":"We describe the JD Explore Academy’s submission of the WMT 2022 shared task on general machine translation. We participated in all high-resource tracks and one mediumresource track, including Chinese↔ English (Zh↔ En), German↔ English(De↔ …","url":["https://arxiv.org/pdf/2209.09444","https://www.statmt.org/wmt22/pdf/2022.wmt-1.37.pdf"]}
|
| 1219 |
{"year":"2022","title":"Vicomtech at DA-VINCIS: Detection of Aggressive and Violent Incidents from Social Media in Spanish","authors":["P Turón, N Perez, A García-Pablos, E Zotova… - 2022"],"snippet":"This paper describes the participation of the Vicomtech NLP team in the DA-VINCIS shared task. This shared task is focused on mentions of violent events in Spanish tweets, and proposes two subtasks: first, detecting whether a violent incident is …","url":["http://ceur-ws.org/Vol-3202/davincis-paper4.pdf"]}
|
| 1220 |
{"year":"2022","title":"Video Games as a Corpus: Sentiment Analysis using Fallout New Vegas Dialog","authors":["M Hämäläinen, K Alnajjar, T Poibeau - 2022"],"snippet":"We present a method for extracting a multilingual sentiment annotated dialog data set from Fallout New Vegas. The game developers have preannotated every line of dialog in the game in one of the 8 different sentiments: anger, disgust, fear, happy …","url":["https://www.researchgate.net/profile/Mika-Haemaelaeinen/publication/363367422_Video_Games_as_a_Corpus_Sentiment_Analysis_using_Fallout_New_Vegas_Dialog/links/6319cc1870cc936cd3f1ae29/Video-Games-as-a-Corpus-Sentiment-Analysis-using-Fallout-New-Vegas-Dialog.pdf"]}
|
| 1221 |
+
{"year":"2022","title":"Vietnamese hate and offensive detection using PhoBERT-CNN and social media streaming data","authors":["K Quoc Tran, A Trong Nguyen, PG Hoang, CD Luu… - Neural Computing and …, 2022","KQ Tran, AT Nguyen, PG Hoang, CD Luu, TH Do… - arXiv preprint arXiv …, 2022"],"snippet":"… XLM-RoBERTa (XLM-R) [28]: is a multilingual model trained using over two terabytes of cleaned and filtered CommonCrawl data. Upsampling low-resource languages during training and vocabulary generation, generating a more extensive …","url":["https://arxiv.org/pdf/2206.00524","https://link.springer.com/article/10.1007/s00521-022-07745-w"]}
|
| 1222 |
{"year":"2022","title":"Visualization of 2D fractal structures associated with the Riemann zeta function","authors":["I Belovas, M Sabaliauskas, L Kuzma - DAMSS: 13th conference on data analysis …, 2022"],"snippet":"DAMSS-2022 is the 13th International Conference on Data Analysis Methods for Software Systems, held in Druskininkai, Lithuania. Every year at the same place and time. The exception was in 2020, when the world was gripped by the Covid-19 …","url":["https://epublications.vu.lt/object/elaba:147807019/147807019.pdf"]}
|
| 1223 |
{"year":"2022","title":"Visuelle Exploration von indirekten Befangenheiten bei der Verarbeitung natürlicher Sprachen durch Transformer Modelle","authors":["JLAD Petit-Frere"],"snippet":"… Common Crawl corpora are composed of text dataset collected from web pages and contains several billion tokens … This metric was used to measure the biases within the Common Crawl pre-trained GloVe model, an ELMo model, and the bert-base …","url":["https://www.cg.tuwien.ac.at/research/publications/2022/louis-alexandre_dit_petit-frere-2022-veo/louis-alexandre_dit_petit-frere-2022-veo-thesis.pdf"]}
|
| 1224 |
{"year":"2022","title":"ViT5: Pretrained Text-to-Text Transformer for Vietnamese Language Generation","authors":["L Phan, H Tran, H Nguyen, TH Trinh - arXiv preprint arXiv:2205.06457, 2022"],"snippet":"We present ViT5, a pretrained Transformer-based encoder-decoder model for the Vietnamese language. With T5-style self-supervised pretraining, ViT5 is trained on a large corpus of high-quality and diverse Vietnamese texts. We benchmark ViT5 on …","url":["https://arxiv.org/pdf/2205.06457"]}
|
2023.jsonl
CHANGED
|
@@ -21,7 +21,7 @@
|
|
| 21 |
{"year":"2023","title":"A BERT-Based Model for Financial Social Media Sentiment Analysis","authors":["J Delgadillo, J Kinyua, C Mutigwe - International Journal of Cognitive and Language …, 2023"],"snippet":"The purpose of sentiment analysis is to determine the sentiment strength (eg, positive, negative, neutral) from a textual source for good decision-making. Natural Language Processing (NLP) in domains such as financial markets requires …","url":["https://publications.waset.org/10012944/a-bert-based-model-for-financial-social-media-sentiment-analysis"]}
|
| 22 |
{"year":"2023","title":"A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development","authors":["T Wu, S He, J Liu, S Sun, K Liu, QL Han, Y Tang - IEEE/CAA Journal of Automatica …, 2023"],"snippet":"ChatGPT, an artificial intelligence generated content (AIGC) model developed by OpenAI, has attracted world-wide attention for its capability of dealing with challenging language understanding and generation tasks in the form of …","url":["https://www.ieee-jas.net/en/article/doi/10.1109/JAS.2023.123618"]}
|
| 23 |
{"year":"2023","title":"A Case Study Analysis of Google Smart Compose and Its Effects on the Student Writing Process From the Student and Teacher Perspectives","authors":["H Bryant - 2023"],"snippet":"This qualitative case study addresses the rise in popularity of the use of predictive text programs within the K-12 educational environment. A problem exists in the discrepancy between the widespread availability of Google Smart Compose within …","url":["https://search.proquest.com/openview/f62cdbf5839f7b94d7eaf8f39e8cdf1a/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 24 |
-
{"year":"2023","title":"A Cohesive Distillation Architecture for Neural Language Models","authors":["JP Wahle - arXiv preprint arXiv:2301.08130, 2023","LR Terry - 2023"],"snippet":"A recent trend in Natural Language Processing is the exponential growth in Language Model (LM) size, which prevents research groups without a necessary hardware infrastructure from
|
| 25 |
{"year":"2023","title":"A Comparative Performance Evaluation of Algorithms for the Analysis and Recognition of Emotional Content","authors":["K Kyritsis, N Spatiotis, I Perikos, M Paraskevas - 2023"],"snippet":"Sentiment Analysis is highly valuable in Natural Language Processing (NLP) across domains, processing and evaluating sentiment in text for emotional understanding. This technology has diverse applications, including social media monitoring, brand …","url":["https://www.intechopen.com/online-first/87923"]}
|
| 26 |
{"year":"2023","title":"A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages","authors":["A Buscemi - arXiv preprint arXiv:2308.04477, 2023"],"snippet":"… OpenAI employed a dataset known as the Common Crawl [24], a publicly accessible collection of billions of web pages, making it as one of the most extended text databases currently accessible. It is to be noted that the selection of the dataset …","url":["https://arxiv.org/pdf/2308.04477"]}
|
| 27 |
{"year":"2023","title":"A Comparative Study of Pre-trained Language Models to Filter Informative Code-mixed Data on Social Media during Disasters","authors":["H Salemi, Y Senarath, H Purohit"],"snippet":"… A multi-lingual language model trained using masked language modeling on 2.5 TB of newly created and cleaned CommonCrawl data. … XLM-R: A multilingual version of RoBERTa that is pre-trained on 2.5TB of filtered CommonCrawl data …","url":["https://idl.iscram.org/files/salemi/2023/2576_Salemi_etal2023.pdf"]}
|
|
@@ -198,7 +198,7 @@
|
|
| 198 |
{"year":"2023","title":"An Investigation of Representation and Allocation Harms in Contrastive Learning","authors":["S Maity, M Agarwal, M Yurochkin, Y Sun - arXiv preprint arXiv:2310.01583, 2023"],"snippet":"… In this experiment, we study the potential harms of CL applied to data obtained from Common Crawl, a popular source of text data for self-… 2019) which consists of around 400k online biographies in English extracted from the Common Crawl data …","url":["https://arxiv.org/pdf/2310.01583"]}
|
| 199 |
{"year":"2023","title":"An Open Source Data Contamination Report for Llama Series Models","authors":["Y Li - arXiv preprint arXiv:2310.17589, 2023"],"snippet":"… Our approach utilises a search engine and the Common Crawl index, avoiding the need to host the full 2017-2020 Common Crawl dumps locally. This massive training data would incur prohibitive computational requirements. However, relying …","url":["https://arxiv.org/pdf/2310.17589"]}
|
| 200 |
{"year":"2023","title":"An Urgency for Inclusivity: Redesigning Datasets for Improved Representation of LGBTQ+ Identity Terms in Artificial Intelligence (AI)","authors":["L Wang - 2023"],"snippet":"… The Common Crawl Dataset stands as a vital resource in the realm of AI model training, … However, despite the Common Crawl's prominence, handling LGBTQ+ identity terms in AI … ’s C4 dataset, a filtered version of the Common Crawl.The …","url":["https://laniwang.com/LaniWangFinalPaper.CORRECTFORMATTING.pdf"]}
|
| 201 |
-
{"year":"2023","title":"Analogical Proportions and Creativity: A Preliminary Study","authors":["S Afantenos, H Prade, LC Bernardes - arXiv preprint arXiv:2310.13500, 2023"],"snippet":"Analogical proportions are statements of the form \"$a$ is to $b$ as $c$ is to $d$\", which expresses that the comparisons of the elements in pair $(a, b)$ and in pair $(c, d)$ yield similar results. Analogical proportions are creative in the sense that given 3 …","url":["https://arxiv.org/pdf/2310.13500"]}
|
| 202 |
{"year":"2023","title":"ANALOGICAL--A New Benchmark for Analogy of Long Text for Large Language Models","authors":["T Wijesiriwardene, R Wickramarachchi, BG Gajera… - arXiv preprint arXiv …, 2023"],"snippet":"… In addition three other corpora containing news articles, web content, and a filtered subset of the CommonCrawl corpus were used. The training approach of RoBERTa differs from BERT as follows. RoBERTa modifies the MLM task by moving …","url":["https://arxiv.org/pdf/2305.05050"]}
|
| 203 |
{"year":"2023","title":"Analysing Cross-Lingual Transfer in Low-Resourced African Named Entity Recognition","authors":["M Beukman, M Fokam - arXiv preprint arXiv:2309.05311, 2023"],"snippet":"Transfer learning has led to large gains in performance for nearly all NLP tasks while making downstream models easier and faster to train. This has also been extended to low-resourced languages, with some success. We investigate the …","url":["https://arxiv.org/pdf/2309.05311"]}
|
| 204 |
{"year":"2023","title":"ANALYSIS OF A DECISION SUPPORT SYSTEM USING AHP FOR FOOD AND RESTAURANT SELECTION BASED ON THE USER'S FOOD CRAVINGS AND …","authors":["D Dyondra, J Purnama, M Galinium - SGU Online Thesis Submission, 2023"],"snippet":"The purpose of this research is to develop a decision support system (DSS) using the AHP algorithm for selecting restaurants based on the user's food cravings and location in Jakarta. The data for the DSS was gathered by scraping restaurant data …","url":["https://thesis.sgu.ac.id/index.php/ots/article/download/4498/832"]}
|
|
@@ -369,7 +369,7 @@
|
|
| 369 |
{"year":"2023","title":"CamPros at CASE 2022 Task 1: Transformer-based Multilingual Protest News Detection","authors":["N Kumari, M Anand, T Mohan, P Kumaraguru… - Proceedings of the 5th …, 2022"],"snippet":"Socio-political protests often lead to grave consequences when they occur. The early detection of such protests is very important for taking early precautionary measures. However, the main shortcoming of protest event detection is the scarcity …","url":["https://aclanthology.org/2022.case-1.24.pdf"]}
|
| 370 |
{"year":"2023","title":"Can ChatGPT Replace Traditional KBQA Models? An In-Depth Analysis of the Question Answering Performance of the GPT LLM Family","authors":["Y Tan, D Min, Y Li, W Li, N Hu, Y Chen, G Qi - International Semantic Web …, 2023"],"snippet":"ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge. Therefore, there is growing interest in exploring whether …","url":["https://link.springer.com/chapter/10.1007/978-3-031-47240-4_19"]}
|
| 371 |
{"year":"2023","title":"Can Large Language Models Capture Dissenting Human Voices?","authors":["N Lee, N An, J Thorne - Proceedings of the 2023 Conference on Empirical …, 2023"],"snippet":"Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks. Augmented by instruction fine-tuning, LLMs have also been shown to generalize in zero-shot settings as well. However, whether LLMs closely …","url":["https://aclanthology.org/2023.emnlp-main.278.pdf"]}
|
| 372 |
-
{"year":"2023","title":"Can Large Language Models Generate Outpatient Clinic Letters at First Consultation That Incorporate Complication Profiles From UK and USA Aesthetic Plastic …","authors":["RHR Roberts, SR Ali, TD Dobbs, IS Whitaker - Aesthetic Surgery Journal Open …, 2023","TD Dobbs, IS Whitaker, MA Cantab - Aesthetic Surgery Journal, 2024"],"snippet":"
|
| 373 |
{"year":"2023","title":"Can Peanuts Fall in Love with Distributional Semantics?","authors":["JA Michaelov, S Coulson, BK Bergen - arXiv preprint arXiv:2301.08731, 2023"],"snippet":"The context in which a sentence appears can drastically alter our expectations about upcoming words - for example, following a short story involving an anthropomorphic peanut, experimental participants are more likely to expect the sentence 'the peanut …","url":["https://arxiv.org/pdf/2301.08731"]}
|
| 374 |
{"year":"2023","title":"Can we Debunk Disinformation by Leveraging SpeakerCredibility and Perplexity Measures?","authors":["AFUR Khilji, A Sachan, D Lachi, AV Singh, TD Singh - 2023"],"snippet":"In the present age, őghting disinformation is the main concern after pandemic. The exponential growth of fake news and its role in deteriorating general public trust and democratic standards certainly calls for counter-combat approaches. The prediction …","url":["https://www.researchsquare.com/article/rs-2764182/latest.pdf"]}
|
| 375 |
{"year":"2023","title":"cantnlp@ LT-EDI-2023: Homophobia/Transphobia Detection in Social Media Comments using Spatio-Temporally Retrained Language Models","authors":["S Wong, M Durward, B Adams, J Dunn - Proceedings of the Third Workshop on …, 2023"],"snippet":"This paper describes our multiclass classification system developed as part of the LT-EDI@ RANLP-2023 shared task. We used a BERT-based language model to detect homophobic and transphobic content in social media comments across five …","url":["https://aclanthology.org/2023.ltedi-1.15.pdf"]}
|
|
@@ -448,7 +448,7 @@
|
|
| 448 |
{"year":"2023","title":"Community Competition and Political Extremism","authors":["C Henry"],"snippet":"… Second, the LLaMa 1 foundational models are trained on publicly available data sources including the CommonCrawl. The … the 30 seed users used to build the community dataset is present in the CommonCrawl corpus. Accuracy, precision, and …","url":["https://henryhenryhenry.com/Henry_JMP_915.pdf"]}
|
| 449 |
{"year":"2023","title":"Company Similarity using Large Language Models","authors":["D Vamvourellis, M Toth, S Bhagat, D Desai, D Mehta… - arXiv preprint arXiv …, 2023"],"snippet":"… It has been trained on multiple data sources like Common crawl dataset (around 600 billion words of text), GitHub dataset (100 million code repository), Stack overflow dataset (170 million questions and answers) on the task of next word …","url":["https://arxiv.org/pdf/2308.08031"]}
|
| 450 |
{"year":"2023","title":"Comparative Analysis of Balanced Code Smell Detection Using Machine Learning Check for updates","authors":["M Sabharwal, A Gupta, R Gandhi, I Khan - … : Proceedings of the International Conference on …"],"snippet":"… Any website which enables API calls to it can be used to collect data or for a more comprehensive analysis, Common Crawl by AWS can be used [5]. The scraped data will be extracted using a local script ran using the python requests library from the …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=033lEAAAQBAJ&oi=fnd&pg=PA371&dq=commoncrawl&ots=06-pLbuKv9&sig=8qoh4Hy7QynXXofSNrnXdYyDkrE"]}
|
| 451 |
-
{"year":"2023","title":"Comparative Analysis of Machine Learning and Deep Learning Models for Sentiment Analysis in Somali","authors":["AA Abdirahman, AO Hashi, MA Elmi, OER Rodriguez"],"snippet":"Understanding and analysing sentiment in user-generated content has become crucial with the increasing use of social media and online platforms. However, sentiment analysis in less-resourced languages like Somali poses unique …","url":["https://simad.edu.so/wp-content/uploads/2023/08/IJEEE-V10I7P104.pdf"]}
|
| 452 |
{"year":"2023","title":"Comparing different search methods for the open access journal recommendation tool B! SON","authors":["E Entrup, A Eppelin, R Ewerth, J Hartwig, M Tullney… - International Journal on …, 2023"],"snippet":"Finding a suitable open access journal to publish academic work is a complex task: Researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders’ conditions and the risk of predatory publishers …","url":["https://link.springer.com/article/10.1007/s00799-023-00372-3"]}
|
| 453 |
{"year":"2023","title":"Comparing Different Transformer Models' Performance for Identifying Toxic Language Online","authors":["C Sundelin - 2023"],"snippet":"There is a growing use of the internet and alongside that, there has been an increase in the use of toxic language towards other people that can be harmful to those that it targets. The usefulness of artificial intelligence has exploded in recent …","url":["https://www.diva-portal.org/smash/get/diva2:1784346/FULLTEXT01.pdf"]}
|
| 454 |
{"year":"2023","title":"Comparing the Similarity of OpenAPI-Based Microservices","authors":["Z Lu, DT Delaney, D Lillis - 2024"],"snippet":"Microservices constitute the state of the art for implementing distributed systems and have been seen as a potential solution towards open systems. The characteristics of open systems require structured microservice management, including grouping …","url":["https://lill.is/pubs/Lu2024.pdf"]}
|
|
@@ -646,7 +646,7 @@
|
|
| 646 |
{"year":"2023","title":"Energy Estimates Across Layers of Computing: From Devices to Large-Scale Applications in Machine Learning for Natural Language Processing, Scientific …","authors":["S Shankar - arXiv preprint arXiv:2310.07516, 2023"],"snippet":"… These AI/ML methods depend on training on a large corpus, namely significant amounts of data using words, phrases, part-of speech requirements, existing collections of text from academic journals, books, social network websites, Wikipedia …","url":["https://arxiv.org/pdf/2310.07516"]}
|
| 647 |
{"year":"2023","title":"Engineering a Distributed-Memory Triangle Counting Algorithm","authors":["P Sanders, TN Uhl - arXiv preprint arXiv:2302.11443, 2023"],"snippet":"Counting triangles in a graph and incident to each vertex is a fundamental and frequently considered task of graph analysis. We consider how to efficiently do this for huge graphs using massively parallel distributed-memory machines …","url":["https://arxiv.org/pdf/2302.11443"]}
|
| 648 |
{"year":"2023","title":"Engineering the Best In-Context Input for GPT-3 in the OpenQA Task","authors":["K Huang, G Sullan, O Ebhomielen"],"snippet":"GPT-3, since its release, has garnered the attention of the NLP community due to its versatility across a wide range of NLP tasks. In this work, we use GPT-3 to approach the OpenQA task, where the model needs to answer input questions without being …","url":["https://kailihuang.com/assets/pdf/cs224u.pdf"]}
|
| 649 |
-
{"year":"2023","title":"Enhanced Emotion and Sentiment Recognition for Empathetic Dialogue System Using Big Data and Deep Learning Methods","authors":["M Kozłowski, K Gabor-Siatkowska, I Stefaniak… - International Conference on …, 2023","M Sowański, A Janicki"],"snippet":"… The process of using the Common Crawl web archive to create an enlarged corpus, named CORTEX+
|
| 650 |
{"year":"2023","title":"Enhanced Phishing URL Detection Using Leveraging BERT with Additional URL Feature Extraction","authors":["KS Jishnu, B Arthi - 2023 5th International Conference on Inventive …, 2023"],"snippet":"… Their heuristic-based deep learning technique made use of RNN models and datasets including PhishTank, Alexa, and Common Crawl. … Their research used the PhishTank and Common Crawl databases, which contain legal and phishing …","url":["https://ieeexplore.ieee.org/abstract/document/10220647/"]}
|
| 651 |
{"year":"2023","title":"Enhancing Customer Support with Knowledge Graph-Based Question Answering","authors":["N Stampe - 2023"],"snippet":"Many companies don’t utilize the huge amount of unstructured data they possess. Old issue tickets are one example. A company that possesses a lot of old issue tickets are Stibo Systems. Meanwhile, customer support staff receive issues that has …","url":["https://www.stiboaccelerator.com/s/Master_Thesis_Niels_Stampe_201708197.pdf"]}
|
| 652 |
{"year":"2023","title":"Enhancing EFL reading and writing through AI-powered tools: design, implementation, and evaluation of an online course","authors":["JC Hsiao, JS Chang - Interactive Learning Environments, 2023"],"snippet":"During the Covid-19 pandemic, global teachers gained extensive experiences with teaching online courses. To design quality online courses in the post-pandemic era, the impact of the latest technology, such as artificial intelligence (AI), must be …","url":["https://www.tandfonline.com/doi/abs/10.1080/10494820.2023.2207187"]}
|
|
@@ -741,7 +741,7 @@
|
|
| 741 |
{"year":"2023","title":"Fake News Detection via Deep Learning Approaches","authors":["M Li - 2023 4th International Symposium on Computer …, 2023"],"snippet":"… RealNews: RealNews is a corpus of news articles whose data is taken from Common Crawl. The body and metadata in each news article is extracted by the Newspaper Python library. The training data uses news data from December 2016 to March 2019. …","url":["https://ieeexplore.ieee.org/abstract/document/10271110/"]}
|
| 742 |
{"year":"2023","title":"Fake news detection: Taxonomy and comparative study","authors":["F Farhangian, RMO Cruz, GDC Cavalcanti - Information Fusion, 2023"],"snippet":"The proliferation of social networks has presented a significant challenge in combating the pervasive issue of fake news within modern societies. Due to the large amount of information and news produced daily in text, audio, and video, the …","url":["https://www.sciencedirect.com/science/article/pii/S1566253523004566"]}
|
| 743 |
{"year":"2023","title":"Faking It: Artificial Intelligence in a Human World","authors":["T Walsh - 2023"]}
|
| 744 |
-
{"year":"2023","title":"Fast and Energy-Efficient Inference for Attention-Based Natural Language Processing Models","authors":["A Hadi Zadeh - 2023","AH Zadeh - 2023"],"snippet":"Creating machines that can
|
| 745 |
{"year":"2023","title":"Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature","authors":["G Bao, Y Zhao, Z Teng, L Yang, Y Zhang - arXiv preprint arXiv:2310.05130, 2023"],"snippet":"Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks. To build trustworthy AI systems, it is imperative to distinguish between machine-generated …","url":["https://arxiv.org/pdf/2310.05130"]}
|
| 746 |
{"year":"2023","title":"Feature Learning in Infinite-Depth Neural Networks","authors":["G Yang, D Yu, C Zhu, S Hayou - NeurIPS 2023 Workshop on Mathematics of Modern …, 2023"],"snippet":"… block is deeper (such as modern transformers), then we find fundamental limitations in all possible infinite-depth limits of such parametrizations, which we illustrate both theoretically and empirically on simple networks as well as Megatron …","url":["https://openreview.net/forum?id=xxYfmRTwyX"]}
|
| 747 |
{"year":"2023","title":"Feature-Level Ensemble Learning for Robust Synthetic Text Detection with DeBERTaV3 and XLM-RoBERTa","authors":["SS Joy, TD Aishi - Proceedings of ALTA, 2023"],"snippet":"As large language models, or LLMs, continue to advance in recent years, they require the development of a potent system to detect whether a text was created by a human or an LLM in order to prevent the unethical use of LLMs. To address this …","url":["https://alta2023.alta.asn.au/files/st_04.pdf"]}
|
|
@@ -866,7 +866,7 @@
|
|
| 866 |
{"year":"2023","title":"How Prevalent is Gender Bias in ChatGPT?--Exploring German and English ChatGPT Responses","authors":["S Urchs, V Thurner, M Aßenmacher, C Heumann… - arXiv preprint arXiv …, 2023"],"snippet":"… It is unclear from the documentation on which data the system was trained exactly, but since it includes training data from CommonCrawl4 it is likely to reflect many of the biases and stereotypes common to internet content. Furthermore, the model is …","url":["https://arxiv.org/pdf/2310.03031"]}
|
| 867 |
{"year":"2023","title":"How to deploy security mechanisms online (consistently)","authors":["S Roth - 2023"],"snippet":"To mitigate a myriad of Web attacks, modern browsers support client-side security policies shipped through HTTP response headers. To enforce these policies, the operator can set response headers that the server then communicates to the client …","url":["https://publikationen.sulb.uni-saarland.de/bitstream/20.500.11880/35991/1/thesis.pdf"]}
|
| 868 |
{"year":"2023","title":"How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases","authors":["A Mueller, T Linzen - arXiv preprint arXiv:2305.19905, 2023"],"snippet":"… All of these models are trained on approximately 34B words from the Colossal Cleaned Common Crawl (C4) web text corpus. … , which we included in the previous experiment, we also pre-train models on the Colossal Cleaned Common …","url":["https://arxiv.org/pdf/2305.19905"]}
|
| 869 |
-
{"year":"2023","title":"How
|
| 870 |
{"year":"2023","title":"How well do language models understand grammar?: a case study on Japanese","authors":["GC Breul - 2022"],"snippet":"Modern attention-based language models such as BERT and GPT have been shown to outperform previous state-of-the-art models on many NLP tasks. This performance implies a level of understanding of grammatical structures. This work …","url":["http://elib.uni-stuttgart.de/bitstream/11682/12803/1/Masterarbeit%20Gerhard%20Breul.pdf"]}
|
| 871 |
{"year":"2023","title":"HPLT: High Performance Language Technologies","authors":["M Aulamo, N Bogoychev, S Ji, G Nail… - Proceedings of the 24th …, 2023"],"snippet":"We describe the High Performance Language Technologies project (HPLT), a 3-year EU-funded project started in September 2022. HPLT will build a space combining petabytes of natural language data with large-scale model training. It will derive …","url":["https://aclanthology.org/2023.eamt-1.61.pdf"]}
|
| 872 |
{"year":"2023","title":"HTTP header based phishing attack detection using machine learning","authors":["S Shukla, M Misra, G Varshney - Transactions on Emerging Telecommunications …"],"snippet":"In the past, many techniques like blacklisting/whitelisting, third‐party, search engine, visual similarity, heuristic, URL features, and website content were used for anti‐phishing. Search engine‐based, third‐party assisted tools and blacklist/whitelist fail to identify …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/ett.4872"]}
|
|
@@ -1004,7 +1004,7 @@
|
|
| 1004 |
{"year":"2023","title":"Large Language Models Can Be Used to Estimate the Latent Positions of Politicians","authors":["PY Wu, J Nagler, JA Tucker, S Messing"],"snippet":"Existing approaches to estimating politicians’ latent positions along specific dimensions often fail when relevant data is limited. We leverage the embedded knowledge in generative large language models (LLMs) to address this challenge …","url":["https://www.patrickywu.com/PatrickYWu_JMP1_LaMPscores.pdf"]}
|
| 1005 |
{"year":"2023","title":"Large language models in medicine","authors":["AJ Thirunavukarasu, DSJ Ting, K Elangovan… - Nature Medicine, 2023"],"snippet":"Large language models (LLMs) can respond to free-text queries without being specifically trained in the task in question, causing excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) …","url":["https://www.nature.com/articles/s41591-023-02448-8"]}
|
| 1006 |
{"year":"2023","title":"Large Language Models Need Symbolic AI","authors":["K Hammond, D Leake - 2023"],"snippet":"… GPT-3 was trained on an extensive dataset, based on a version of the CommonCrawl dataset (with almost a trillion words) and additional reference sources. Given tasks and few-shot demonstrations provided to the system as text …","url":["https://ceur-ws.org/Vol-3432/paper17.pdf"]}
|
| 1007 |
-
{"year":"2023","title":"Large Language Models","authors":["M McTear, M Ashurkina - Transforming Conversational AI: Exploring the Power …, 2024","MR Douglas - arXiv preprint arXiv:2307.05782, 2023"],"snippet":"
|
| 1008 |
{"year":"2023","title":"Large Language Models' Understanding of Math: Source Criticism and Extrapolation","authors":["R Yousefzadeh, X Cao - arXiv preprint arXiv:2311.07618, 2023"],"snippet":"… Common Crawl is particularly interesting. The GPT-f model developed for mathematical learning was trained on 300 billion tokens from CommonCrawl… The size of the most recent CommonCrawl is 390 TiB including the contents of 3.1 billion …","url":["https://arxiv.org/pdf/2311.07618"]}
|
| 1009 |
{"year":"2023","title":"Large Language Models, scientific knowledge and factuality: A systematic analysis in antibiotic discovery","authors":["M Wysocka, O Wysocki, M Delmas, V Mutel, A Freitas - arXiv preprint arXiv …, 2023"],"snippet":"Inferring over and extracting information from Large Language Models (LLMs) trained on a large corpus of scientific literature can potentially drive a new era in biomedical research, reducing the barriers for accessing existing medical evidence …","url":["https://arxiv.org/pdf/2305.17819"]}
|
| 1010 |
{"year":"2023","title":"Large Scale Fine-Tuned Transformers Models Application for Business Names Generation","authors":["M Lukauskas, T Rasymas, M Minelga, D Vaitmonas - Computing and Informatics, 2023"],"snippet":"… on larger datasets, leading to pre-trained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which have been trained on large language datasets such as the …","url":["https://www.cai.sk/ojs/index.php/cai/article/download/2023_3_525/1228"]}
|
|
@@ -1457,7 +1457,7 @@
|
|
| 1457 |
{"year":"2023","title":"Subject-verb Agreement with Seq2Seq Transformers: Bigger Is Better, but Still Not Best","authors":["MA Wilson, Z Zhou, R Frank - Proceedings of the Society for Computation in …, 2023"],"snippet":"Past work (Linzen et al., 2016; Goldberg, 2019, ao) has used the performance of neural network language models on subject-verb agreement to argue that such models possess structure-sensitive grammatical knowledge. We investigate what …","url":["https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1303&context=scil"]}
|
| 1458 |
{"year":"2023","title":"Submission of USTC's system for the IWSLT 2023-Offline Speech Translation Track","authors":["X Zhou, J Cui, Z Ye, Y Wang, L Xu, H Zhang, W Zhang… - Proceedings of the 20th …, 2023"],"snippet":"This paper describes the submissions of the research group USTC-NELSLIP to the 2023 IWSLT Offline Speech Translation competition, which involves translating spoken English into written Chinese. We utilize both cascaded models and end-to-end …","url":["https://aclanthology.org/2023.iwslt-1.15.pdf"]}
|
| 1459 |
{"year":"2023","title":"Subversion of the Human Aura: A Crisis in Representation","authors":["NK Hayles - American Literature, 2023"],"snippet":"The human aura is now being subverted by a variety of simulacra. OpenAI’s language-generation program GPT-3 illustrates the challenges of interpreting algorithmic-generated texts. This article advocates interpretive strategies that …","url":["https://read.dukeupress.edu/american-literature/article-abstract/doi/10.1215/00029831-10575063/344236"]}
|
| 1460 |
-
{"year":"2023","title":"Subword-
|
| 1461 |
{"year":"2023","title":"Suicidal Text Detection in Social Media","authors":["MP Karthikeyan, I Ajay, R Magesh, G Saran - 2023"],"snippet":"This system is developed with the aim of providing and insight information of people who are personally disturbed by the factors of either their personal life or family background or bully at school or work pressure. With the help of people’s online …","url":["https://www.ijrar.org/papers/IJRAR23B1004.pdf"]}
|
| 1462 |
{"year":"2023","title":"Suicide risk assessment using word-level model with dictionary-based risky posts selection","authors":["YS Tsai, ALP Chen - Multimedia Tools and Applications, 2023"],"snippet":"Suicide is a serious issue around the world and is a leading cause of death in US. In the past 20 years, the suicide rate has seen a significant increase of 35%. With the rapid development of information technology, more and more people begin to use …","url":["https://link.springer.com/article/10.1007/s11042-023-16361-2"]}
|
| 1463 |
{"year":"2023","title":"SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation","authors":["J Jiang, C Dong, A Aizawa, S Kurohashi - arXiv preprint arXiv:2305.08371, 2023"],"snippet":"… For TextTiling+Glove, we used the version pretrained with 42 billion tokens of web data from Common Crawl.For GreedySeg and CSM, we corrected some inconsistencies in their open-sourced codes with respect to their original published …","url":["https://arxiv.org/pdf/2305.08371"]}
|
|
@@ -1705,7 +1705,7 @@
|
|
| 1705 |
{"year":"2023","title":"Vision-Language Models for Vision Tasks: A Survey","authors":["J Zhang, J Huang, S Jin, S Lu - arXiv preprint arXiv:2304.00685, 2023"],"snippet":"Most visual recognition studies rely heavily on crowd-labelled data in deep neural networks (DNNs) training, and they usually train a DNN for each single visual recognition task, leading to a laborious and time-consuming visual recognition …","url":["https://arxiv.org/pdf/2304.00685"]}
|
| 1706 |
{"year":"2023","title":"ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing","authors":["QN Nguyen, TC Phan, DV Nguyen, K Van Nguyen - arXiv preprint arXiv:2310.11166, 2023"],"snippet":"English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Although Vietnam has approximately 100M people speaking …","url":["https://arxiv.org/pdf/2310.11166"]}
|
| 1707 |
{"year":"2023","title":"Visual experience modulates the sensitivity to the distributional history of words in natural language","authors":["G Anceresi, D Gatti, M Marelli, T Vecchi, L Rinaldi - 2023"],"snippet":"Different experiential traces (ie, linguistic, motor and perceptual) are likely contributing to the organization of human semantic knowledge. Here, we aimed to address this issue by investigating whether visual experience may affect the …","url":["https://psyarxiv.com/jqa9k/download?format=pdf"]}
|
| 1708 |
-
{"year":"2023","title":"Visual Question Answering: A Survey on Techniques and Common Trends in Recent Literature","authors":["ACAM de Faria, FC Bastos, JVNA da Silva, VL Fabris… - arXiv preprint arXiv …, 2023","CFG dos Sants, F de Castro Bastos, ACAM de Faria… - 2023"],"snippet":"… More technically, this new architecture has a language model based on Text-
|
| 1709 |
{"year":"2023","title":"Visual-Semantic Learning","authors":["C Yin - 2023"],"snippet":"… of 15 words, while the questions with length smaller than 15 were padded with zeros to the length of 15 (10 for the MSVD-QA dataset), and each word in the questions was represented as a 300D vectors using the GloVe word embedding [214] …","url":["https://search.proquest.com/openview/f5cf7cabc3e1cbcb0a2fece160ce1319/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 1710 |
{"year":"2023","title":"Visualisation and Classification of Phishing URL using Ensemble Learning Algorithms and Hyper-Parameter Tuning","authors":["G Agarwal, C Goel, K Jindal, T Subbulakshmi - 2023 Third International Conference …, 2023"],"snippet":"… Alexa and Common Crawl were used to gather legitimate URLs. Lexical features, host-based features, and correlated feature groups are the three categories used to classify the features. Lexical features are textual aspects of the URL rather than the …","url":["https://ieeexplore.ieee.org/abstract/document/10176642/"]}
|
| 1711 |
{"year":"2023","title":"Vocabulary-free Image Classification","authors":["A Conti, E Fini, M Mancini, P Rota, Y Wang, E Ricci - arXiv preprint arXiv:2306.00917, 2023"],"snippet":"Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, a pre-defined set of categories, aka the vocabulary, is assumed at test time for composing the …","url":["https://arxiv.org/pdf/2306.00917"]}
|
| 21 |
{"year":"2023","title":"A BERT-Based Model for Financial Social Media Sentiment Analysis","authors":["J Delgadillo, J Kinyua, C Mutigwe - International Journal of Cognitive and Language …, 2023"],"snippet":"The purpose of sentiment analysis is to determine the sentiment strength (eg, positive, negative, neutral) from a textual source for good decision-making. Natural Language Processing (NLP) in domains such as financial markets requires …","url":["https://publications.waset.org/10012944/a-bert-based-model-for-financial-social-media-sentiment-analysis"]}
|
| 22 |
{"year":"2023","title":"A Brief Overview of ChatGPT: The History, Status Quo and Potential Future Development","authors":["T Wu, S He, J Liu, S Sun, K Liu, QL Han, Y Tang - IEEE/CAA Journal of Automatica …, 2023"],"snippet":"ChatGPT, an artificial intelligence generated content (AIGC) model developed by OpenAI, has attracted world-wide attention for its capability of dealing with challenging language understanding and generation tasks in the form of …","url":["https://www.ieee-jas.net/en/article/doi/10.1109/JAS.2023.123618"]}
|
| 23 |
{"year":"2023","title":"A Case Study Analysis of Google Smart Compose and Its Effects on the Student Writing Process From the Student and Teacher Perspectives","authors":["H Bryant - 2023"],"snippet":"This qualitative case study addresses the rise in popularity of the use of predictive text programs within the K-12 educational environment. A problem exists in the discrepancy between the widespread availability of Google Smart Compose within …","url":["https://search.proquest.com/openview/f62cdbf5839f7b94d7eaf8f39e8cdf1a/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
|
| 24 |
+
{"year":"2023","title":"A Cohesive Distillation Architecture for Neural Language Models","authors":["JP Wahle - arXiv preprint arXiv:2301.08130, 2023","LR Terry - 2023"],"snippet":"A recent trend in Natural Language Processing is the exponential growth in Language Model (LM) size, which prevents research groups without a necessary hardware infrastructure from taking part in the development process. This study …","url":["https://arxiv.org/pdf/2301.08130","https://www.authorea.com/doi/pdf/10.22541/au.167528147.79728645"]}
|
| 25 |
{"year":"2023","title":"A Comparative Performance Evaluation of Algorithms for the Analysis and Recognition of Emotional Content","authors":["K Kyritsis, N Spatiotis, I Perikos, M Paraskevas - 2023"],"snippet":"Sentiment Analysis is highly valuable in Natural Language Processing (NLP) across domains, processing and evaluating sentiment in text for emotional understanding. This technology has diverse applications, including social media monitoring, brand …","url":["https://www.intechopen.com/online-first/87923"]}
|
| 26 |
{"year":"2023","title":"A Comparative Study of Code Generation using ChatGPT 3.5 across 10 Programming Languages","authors":["A Buscemi - arXiv preprint arXiv:2308.04477, 2023"],"snippet":"… OpenAI employed a dataset known as the Common Crawl [24], a publicly accessible collection of billions of web pages, making it as one of the most extended text databases currently accessible. It is to be noted that the selection of the dataset …","url":["https://arxiv.org/pdf/2308.04477"]}
|
| 27 |
{"year":"2023","title":"A Comparative Study of Pre-trained Language Models to Filter Informative Code-mixed Data on Social Media during Disasters","authors":["H Salemi, Y Senarath, H Purohit"],"snippet":"… A multi-lingual language model trained using masked language modeling on 2.5 TB of newly created and cleaned CommonCrawl data. … XLM-R: A multilingual version of RoBERTa that is pre-trained on 2.5TB of filtered CommonCrawl data …","url":["https://idl.iscram.org/files/salemi/2023/2576_Salemi_etal2023.pdf"]}
|
| 198 |
{"year":"2023","title":"An Investigation of Representation and Allocation Harms in Contrastive Learning","authors":["S Maity, M Agarwal, M Yurochkin, Y Sun - arXiv preprint arXiv:2310.01583, 2023"],"snippet":"… In this experiment, we study the potential harms of CL applied to data obtained from Common Crawl, a popular source of text data for self-… 2019) which consists of around 400k online biographies in English extracted from the Common Crawl data …","url":["https://arxiv.org/pdf/2310.01583"]}
|
| 199 |
{"year":"2023","title":"An Open Source Data Contamination Report for Llama Series Models","authors":["Y Li - arXiv preprint arXiv:2310.17589, 2023"],"snippet":"… Our approach utilises a search engine and the Common Crawl index, avoiding the need to host the full 2017-2020 Common Crawl dumps locally. This massive training data would incur prohibitive computational requirements. However, relying …","url":["https://arxiv.org/pdf/2310.17589"]}
|
| 200 |
{"year":"2023","title":"An Urgency for Inclusivity: Redesigning Datasets for Improved Representation of LGBTQ+ Identity Terms in Artificial Intelligence (AI)","authors":["L Wang - 2023"],"snippet":"… The Common Crawl Dataset stands as a vital resource in the realm of AI model training, … However, despite the Common Crawl's prominence, handling LGBTQ+ identity terms in AI … ’s C4 dataset, a filtered version of the Common Crawl.The …","url":["https://laniwang.com/LaniWangFinalPaper.CORRECTFORMATTING.pdf"]}
|
| 201 |
+
{"year":"2023","title":"Analogical Proportions and Creativity: A Preliminary Study","authors":["S Afantenos, H Prade, LC Bernardes - arXiv preprint arXiv:2310.13500, 2023","SAHPG Richard, LC Bernardes"],"snippet":"Analogical proportions are statements of the form \"$a$ is to $b$ as $c$ is to $d$\", which expresses that the comparisons of the elements in pair $(a, b)$ and in pair $(c, d)$ yield similar results. Analogical proportions are creative in the sense that given 3 …","url":["https://arxiv.org/pdf/2310.13500","https://computationalcreativity.net/iccc24/full-papers/ICCC24_paper_32.pdf"]}
|
| 202 |
{"year":"2023","title":"ANALOGICAL--A New Benchmark for Analogy of Long Text for Large Language Models","authors":["T Wijesiriwardene, R Wickramarachchi, BG Gajera… - arXiv preprint arXiv …, 2023"],"snippet":"… In addition three other corpora containing news articles, web content, and a filtered subset of the CommonCrawl corpus were used. The training approach of RoBERTa differs from BERT as follows. RoBERTa modifies the MLM task by moving …","url":["https://arxiv.org/pdf/2305.05050"]}
|
| 203 |
{"year":"2023","title":"Analysing Cross-Lingual Transfer in Low-Resourced African Named Entity Recognition","authors":["M Beukman, M Fokam - arXiv preprint arXiv:2309.05311, 2023"],"snippet":"Transfer learning has led to large gains in performance for nearly all NLP tasks while making downstream models easier and faster to train. This has also been extended to low-resourced languages, with some success. We investigate the …","url":["https://arxiv.org/pdf/2309.05311"]}
|
| 204 |
{"year":"2023","title":"ANALYSIS OF A DECISION SUPPORT SYSTEM USING AHP FOR FOOD AND RESTAURANT SELECTION BASED ON THE USER'S FOOD CRAVINGS AND …","authors":["D Dyondra, J Purnama, M Galinium - SGU Online Thesis Submission, 2023"],"snippet":"The purpose of this research is to develop a decision support system (DSS) using the AHP algorithm for selecting restaurants based on the user's food cravings and location in Jakarta. The data for the DSS was gathered by scraping restaurant data …","url":["https://thesis.sgu.ac.id/index.php/ots/article/download/4498/832"]}
|
| 369 |
{"year":"2023","title":"CamPros at CASE 2022 Task 1: Transformer-based Multilingual Protest News Detection","authors":["N Kumari, M Anand, T Mohan, P Kumaraguru… - Proceedings of the 5th …, 2022"],"snippet":"Socio-political protests often lead to grave consequences when they occur. The early detection of such protests is very important for taking early precautionary measures. However, the main shortcoming of protest event detection is the scarcity …","url":["https://aclanthology.org/2022.case-1.24.pdf"]}
|
| 370 |
{"year":"2023","title":"Can ChatGPT Replace Traditional KBQA Models? An In-Depth Analysis of the Question Answering Performance of the GPT LLM Family","authors":["Y Tan, D Min, Y Li, W Li, N Hu, Y Chen, G Qi - International Semantic Web …, 2023"],"snippet":"ChatGPT is a powerful large language model (LLM) that covers knowledge resources such as Wikipedia and supports natural language question answering using its own knowledge. Therefore, there is growing interest in exploring whether …","url":["https://link.springer.com/chapter/10.1007/978-3-031-47240-4_19"]}
|
| 371 |
{"year":"2023","title":"Can Large Language Models Capture Dissenting Human Voices?","authors":["N Lee, N An, J Thorne - Proceedings of the 2023 Conference on Empirical …, 2023"],"snippet":"Large language models (LLMs) have shown impressive achievements in solving a broad range of tasks. Augmented by instruction fine-tuning, LLMs have also been shown to generalize in zero-shot settings as well. However, whether LLMs closely …","url":["https://aclanthology.org/2023.emnlp-main.278.pdf"]}
|
| 372 |
+
{"year":"2023","title":"Can Large Language Models Generate Outpatient Clinic Letters at First Consultation That Incorporate Complication Profiles From UK and USA Aesthetic Plastic …","authors":["RHR Roberts, SR Ali, TD Dobbs, IS Whitaker - Aesthetic Surgery Journal Open …, 2023","TD Dobbs, IS Whitaker, MA Cantab - Aesthetic Surgery Journal, 2024"],"snippet":"The importance of written communication between clinicians and patients, especially in the wake of the Supreme Court case of Montgomery vs Lanarkshire, has led to a shift toward patient-centric care in the United Kingdom. This study …","url":["https://academic.oup.com/asjopenforum/advance-article/doi/10.1093/asjof/ojad109/7459516","https://www.researchgate.net/profile/Rich-Roberts-2/publication/376374844_Can_Large_Language_Models_Generate_Outpatient_Clinic_Letters_at_First_Consultation_That_Incorporate_Complication_Profiles_From_UK_and_USA_Aesthetic_Plastic_Surgery_Associations/links/659c25f96f6e450f19d775da/Can-Large-Language-Models-Generate-Outpatient-Clinic-Letters-at-First-Consultation-That-Incorporate-Complication-Profiles-From-UK-and-USA-Aesthetic-Plastic-Surgery-Associations.pdf"]}
|
| 373 |
{"year":"2023","title":"Can Peanuts Fall in Love with Distributional Semantics?","authors":["JA Michaelov, S Coulson, BK Bergen - arXiv preprint arXiv:2301.08731, 2023"],"snippet":"The context in which a sentence appears can drastically alter our expectations about upcoming words - for example, following a short story involving an anthropomorphic peanut, experimental participants are more likely to expect the sentence 'the peanut …","url":["https://arxiv.org/pdf/2301.08731"]}
|
| 374 |
{"year":"2023","title":"Can we Debunk Disinformation by Leveraging SpeakerCredibility and Perplexity Measures?","authors":["AFUR Khilji, A Sachan, D Lachi, AV Singh, TD Singh - 2023"],"snippet":"In the present age, őghting disinformation is the main concern after pandemic. The exponential growth of fake news and its role in deteriorating general public trust and democratic standards certainly calls for counter-combat approaches. The prediction …","url":["https://www.researchsquare.com/article/rs-2764182/latest.pdf"]}
|
| 375 |
{"year":"2023","title":"cantnlp@ LT-EDI-2023: Homophobia/Transphobia Detection in Social Media Comments using Spatio-Temporally Retrained Language Models","authors":["S Wong, M Durward, B Adams, J Dunn - Proceedings of the Third Workshop on …, 2023"],"snippet":"This paper describes our multiclass classification system developed as part of the LT-EDI@ RANLP-2023 shared task. We used a BERT-based language model to detect homophobic and transphobic content in social media comments across five …","url":["https://aclanthology.org/2023.ltedi-1.15.pdf"]}
|
| 448 |
{"year":"2023","title":"Community Competition and Political Extremism","authors":["C Henry"],"snippet":"… Second, the LLaMa 1 foundational models are trained on publicly available data sources including the CommonCrawl. The … the 30 seed users used to build the community dataset is present in the CommonCrawl corpus. Accuracy, precision, and …","url":["https://henryhenryhenry.com/Henry_JMP_915.pdf"]}
|
| 449 |
{"year":"2023","title":"Company Similarity using Large Language Models","authors":["D Vamvourellis, M Toth, S Bhagat, D Desai, D Mehta… - arXiv preprint arXiv …, 2023"],"snippet":"… It has been trained on multiple data sources like Common crawl dataset (around 600 billion words of text), GitHub dataset (100 million code repository), Stack overflow dataset (170 million questions and answers) on the task of next word …","url":["https://arxiv.org/pdf/2308.08031"]}
|
| 450 |
{"year":"2023","title":"Comparative Analysis of Balanced Code Smell Detection Using Machine Learning Check for updates","authors":["M Sabharwal, A Gupta, R Gandhi, I Khan - … : Proceedings of the International Conference on …"],"snippet":"… Any website which enables API calls to it can be used to collect data or for a more comprehensive analysis, Common Crawl by AWS can be used [5]. The scraped data will be extracted using a local script ran using the python requests library from the …","url":["https://books.google.de/books?hl=en&lr=lang_en&id=033lEAAAQBAJ&oi=fnd&pg=PA371&dq=commoncrawl&ots=06-pLbuKv9&sig=8qoh4Hy7QynXXofSNrnXdYyDkrE"]}
|
| 451 |
+
{"year":"2023","title":"Comparative Analysis of Machine Learning and Deep Learning Models for Sentiment Analysis in Somali","authors":["AA Abdirahman, AO Hashi, MA Elmi, OER Rodriguez"],"snippet":"Understanding and analysing sentiment in user-generated content has become crucial with the increasing use of social media and online platforms. However, sentiment analysis in less-resourced languages like Somali poses unique …","url":["https://simad.edu.so/wp-content/uploads/2023/08/IJEEE-V10I7P104.pdf","https://www.researchgate.net/profile/Abdirahman-Hashi/publication/372976119_Comparative_Analysis_of_Machine_Learning_and_Deep_Learning_Models_for_Sentiment_Analysis_in_Somali_Language/links/667fd91df3b61c4e2c9992ce/Comparative-Analysis-of-Machine-Learning-and-Deep-Learning-Models-for-Sentiment-Analysis-in-Somali-Language.pdf"]}
{"year":"2023","title":"Comparing different search methods for the open access journal recommendation tool B! SON","authors":["E Entrup, A Eppelin, R Ewerth, J Hartwig, M Tullney… - International Journal on …, 2023"],"snippet":"Finding a suitable open access journal to publish academic work is a complex task: Researchers have to navigate a constantly growing number of journals, institutional agreements with publishers, funders’ conditions and the risk of predatory publishers …","url":["https://link.springer.com/article/10.1007/s00799-023-00372-3"]}
{"year":"2023","title":"Comparing Different Transformer Models' Performance for Identifying Toxic Language Online","authors":["C Sundelin - 2023"],"snippet":"There is a growing use of the internet and alongside that, there has been an increase in the use of toxic language towards other people that can be harmful to those that it targets. The usefulness of artificial intelligence has exploded in recent …","url":["https://www.diva-portal.org/smash/get/diva2:1784346/FULLTEXT01.pdf"]}
{"year":"2023","title":"Comparing the Similarity of OpenAPI-Based Microservices","authors":["Z Lu, DT Delaney, D Lillis - 2024"],"snippet":"Microservices constitute the state of the art for implementing distributed systems and have been seen as a potential solution towards open systems. The characteristics of open systems require structured microservice management, including grouping …","url":["https://lill.is/pubs/Lu2024.pdf"]}
{"year":"2023","title":"Energy Estimates Across Layers of Computing: From Devices to Large-Scale Applications in Machine Learning for Natural Language Processing, Scientific …","authors":["S Shankar - arXiv preprint arXiv:2310.07516, 2023"],"snippet":"… These AI/ML methods depend on training on a large corpus, namely significant amounts of data using words, phrases, part-of speech requirements, existing collections of text from academic journals, books, social network websites, Wikipedia …","url":["https://arxiv.org/pdf/2310.07516"]}
{"year":"2023","title":"Engineering a Distributed-Memory Triangle Counting Algorithm","authors":["P Sanders, TN Uhl - arXiv preprint arXiv:2302.11443, 2023"],"snippet":"Counting triangles in a graph and incident to each vertex is a fundamental and frequently considered task of graph analysis. We consider how to efficiently do this for huge graphs using massively parallel distributed-memory machines …","url":["https://arxiv.org/pdf/2302.11443"]}
{"year":"2023","title":"Engineering the Best In-Context Input for GPT-3 in the OpenQA Task","authors":["K Huang, G Sullan, O Ebhomielen"],"snippet":"GPT-3, since its release, has garnered the attention of the NLP community due to its versatility across a wide range of NLP tasks. In this work, we use GPT-3 to approach the OpenQA task, where the model needs to answer input questions without being …","url":["https://kailihuang.com/assets/pdf/cs224u.pdf"]}
{"year":"2023","title":"Enhanced Emotion and Sentiment Recognition for Empathetic Dialogue System Using Big Data and Deep Learning Methods","authors":["M Kozłowski, K Gabor-Siatkowska, I Stefaniak… - International Conference on …, 2023","M Sowański, A Janicki"],"snippet":"… The process of using the Common Crawl web archive to create an enlarged corpus, named CORTEX+pCC, is presented. An empathetic dialogue system named Terabot, incorporating the elaborated method, is also described. The system is …","url":["https://link.springer.com/chapter/10.1007/978-3-031-35995-8_33","https://www.iccs-meeting.org/archive/iccs2023/papers/140730475.pdf"]}
{"year":"2023","title":"Enhanced Phishing URL Detection Using Leveraging BERT with Additional URL Feature Extraction","authors":["KS Jishnu, B Arthi - 2023 5th International Conference on Inventive …, 2023"],"snippet":"… Their heuristic-based deep learning technique made use of RNN models and datasets including PhishTank, Alexa, and Common Crawl. … Their research used the PhishTank and Common Crawl databases, which contain legal and phishing …","url":["https://ieeexplore.ieee.org/abstract/document/10220647/"]}
{"year":"2023","title":"Enhancing Customer Support with Knowledge Graph-Based Question Answering","authors":["N Stampe - 2023"],"snippet":"Many companies don’t utilize the huge amount of unstructured data they possess. Old issue tickets are one example. A company that possesses a lot of old issue tickets are Stibo Systems. Meanwhile, customer support staff receive issues that has …","url":["https://www.stiboaccelerator.com/s/Master_Thesis_Niels_Stampe_201708197.pdf"]}
{"year":"2023","title":"Enhancing EFL reading and writing through AI-powered tools: design, implementation, and evaluation of an online course","authors":["JC Hsiao, JS Chang - Interactive Learning Environments, 2023"],"snippet":"During the Covid-19 pandemic, global teachers gained extensive experiences with teaching online courses. To design quality online courses in the post-pandemic era, the impact of the latest technology, such as artificial intelligence (AI), must be …","url":["https://www.tandfonline.com/doi/abs/10.1080/10494820.2023.2207187"]}
{"year":"2023","title":"Fake News Detection via Deep Learning Approaches","authors":["M Li - 2023 4th International Symposium on Computer …, 2023"],"snippet":"… RealNews: RealNews is a corpus of news articles whose data is taken from Common Crawl. The body and metadata in each news article is extracted by the Newspaper Python library. The training data uses news data from December 2016 to March 2019. …","url":["https://ieeexplore.ieee.org/abstract/document/10271110/"]}
{"year":"2023","title":"Fake news detection: Taxonomy and comparative study","authors":["F Farhangian, RMO Cruz, GDC Cavalcanti - Information Fusion, 2023"],"snippet":"The proliferation of social networks has presented a significant challenge in combating the pervasive issue of fake news within modern societies. Due to the large amount of information and news produced daily in text, audio, and video, the …","url":["https://www.sciencedirect.com/science/article/pii/S1566253523004566"]}
{"year":"2023","title":"Faking It: Artificial Intelligence in a Human World","authors":["T Walsh - 2023"]}
{"year":"2023","title":"Fast and Energy-Efficient Inference for Attention-Based Natural Language Processing Models","authors":["A Hadi Zadeh - 2023","AH Zadeh - 2023"],"snippet":"Creating machines that can``understand’’our language and``interact’’with us as we interact with each other has been a dream that motivated many and captured the imaginations of even more. Attention-Based Transformer models have demonstrated …","url":["https://search.proquest.com/openview/8ef0b6a759aff7cf22bf26f40affd1bd/1?pq-origsite=gscholar&cbl=18750&diss=y","https://tspace.library.utoronto.ca/bitstream/1807/128003/3/Hadi_Zadeh_Ali_202306_PhD_thesis.pdf"]}
{"year":"2023","title":"Fast-DetectGPT: Efficient Zero-Shot Detection of Machine-Generated Text via Conditional Probability Curvature","authors":["G Bao, Y Zhao, Z Teng, L Yang, Y Zhang - arXiv preprint arXiv:2310.05130, 2023"],"snippet":"Large language models (LLMs) have shown the ability to produce fluent and cogent content, presenting both productivity opportunities and societal risks. To build trustworthy AI systems, it is imperative to distinguish between machine-generated …","url":["https://arxiv.org/pdf/2310.05130"]}
{"year":"2023","title":"Feature Learning in Infinite-Depth Neural Networks","authors":["G Yang, D Yu, C Zhu, S Hayou - NeurIPS 2023 Workshop on Mathematics of Modern …, 2023"],"snippet":"… block is deeper (such as modern transformers), then we find fundamental limitations in all possible infinite-depth limits of such parametrizations, which we illustrate both theoretically and empirically on simple networks as well as Megatron …","url":["https://openreview.net/forum?id=xxYfmRTwyX"]}
{"year":"2023","title":"Feature-Level Ensemble Learning for Robust Synthetic Text Detection with DeBERTaV3 and XLM-RoBERTa","authors":["SS Joy, TD Aishi - Proceedings of ALTA, 2023"],"snippet":"As large language models, or LLMs, continue to advance in recent years, they require the development of a potent system to detect whether a text was created by a human or an LLM in order to prevent the unethical use of LLMs. To address this …","url":["https://alta2023.alta.asn.au/files/st_04.pdf"]}
{"year":"2023","title":"How Prevalent is Gender Bias in ChatGPT?--Exploring German and English ChatGPT Responses","authors":["S Urchs, V Thurner, M Aßenmacher, C Heumann… - arXiv preprint arXiv …, 2023"],"snippet":"… It is unclear from the documentation on which data the system was trained exactly, but since it includes training data from CommonCrawl4 it is likely to reflect many of the biases and stereotypes common to internet content. Furthermore, the model is …","url":["https://arxiv.org/pdf/2310.03031"]}
{"year":"2023","title":"How to deploy security mechanisms online (consistently)","authors":["S Roth - 2023"],"snippet":"To mitigate a myriad of Web attacks, modern browsers support client-side security policies shipped through HTTP response headers. To enforce these policies, the operator can set response headers that the server then communicates to the client …","url":["https://publikationen.sulb.uni-saarland.de/bitstream/20.500.11880/35991/1/thesis.pdf"]}
{"year":"2023","title":"How to Plant Trees in Language Models: Data and Architectural Effects on the Emergence of Syntactic Inductive Biases","authors":["A Mueller, T Linzen - arXiv preprint arXiv:2305.19905, 2023"],"snippet":"… All of these models are trained on approximately 34B words from the Colossal Cleaned Common Crawl (C4) web text corpus. … , which we included in the previous experiment, we also pre-train models on the Colossal Cleaned Common …","url":["https://arxiv.org/pdf/2305.19905"]}
{"year":"2023","title":"How user language affects conflict fatality estimates in ChatGPT","authors":["CV Steinert, D Kazenwadel - Journal of Peace Research, 2024","D Kazenwadel, CV Steinert - arXiv preprint arXiv:2308.00072, 2023"],"snippet":"OpenAI’s ChatGPT language model has gained popularity as a powerful tool for problem-solving and information retrieval. However, concerns arise about the reproduction of biases present in the language-specific training data. In this study …","url":["https://arxiv.org/pdf/2308.00072","https://journals.sagepub.com/doi/pdf/10.1177/00223433241279381"]}
{"year":"2023","title":"How well do language models understand grammar?: a case study on Japanese","authors":["GC Breul - 2022"],"snippet":"Modern attention-based language models such as BERT and GPT have been shown to outperform previous state-of-the-art models on many NLP tasks. This performance implies a level of understanding of grammatical structures. This work …","url":["http://elib.uni-stuttgart.de/bitstream/11682/12803/1/Masterarbeit%20Gerhard%20Breul.pdf"]}
{"year":"2023","title":"HPLT: High Performance Language Technologies","authors":["M Aulamo, N Bogoychev, S Ji, G Nail… - Proceedings of the 24th …, 2023"],"snippet":"We describe the High Performance Language Technologies project (HPLT), a 3-year EU-funded project started in September 2022. HPLT will build a space combining petabytes of natural language data with large-scale model training. It will derive …","url":["https://aclanthology.org/2023.eamt-1.61.pdf"]}
{"year":"2023","title":"HTTP header based phishing attack detection using machine learning","authors":["S Shukla, M Misra, G Varshney - Transactions on Emerging Telecommunications …"],"snippet":"In the past, many techniques like blacklisting/whitelisting, third‐party, search engine, visual similarity, heuristic, URL features, and website content were used for anti‐phishing. Search engine‐based, third‐party assisted tools and blacklist/whitelist fail to identify …","url":["https://onlinelibrary.wiley.com/doi/abs/10.1002/ett.4872"]}
{"year":"2023","title":"Large Language Models Can Be Used to Estimate the Latent Positions of Politicians","authors":["PY Wu, J Nagler, JA Tucker, S Messing"],"snippet":"Existing approaches to estimating politicians’ latent positions along specific dimensions often fail when relevant data is limited. We leverage the embedded knowledge in generative large language models (LLMs) to address this challenge …","url":["https://www.patrickywu.com/PatrickYWu_JMP1_LaMPscores.pdf"]}
{"year":"2023","title":"Large language models in medicine","authors":["AJ Thirunavukarasu, DSJ Ting, K Elangovan… - Nature Medicine, 2023"],"snippet":"Large language models (LLMs) can respond to free-text queries without being specifically trained in the task in question, causing excitement and concern about their use in healthcare settings. ChatGPT is a generative artificial intelligence (AI) …","url":["https://www.nature.com/articles/s41591-023-02448-8"]}
{"year":"2023","title":"Large Language Models Need Symbolic AI","authors":["K Hammond, D Leake - 2023"],"snippet":"… GPT-3 was trained on an extensive dataset, based on a version of the CommonCrawl dataset (with almost a trillion words) and additional reference sources. Given tasks and few-shot demonstrations provided to the system as text …","url":["https://ceur-ws.org/Vol-3432/paper17.pdf"]}
{"year":"2023","title":"Large Language Models","authors":["M McTear, M Ashurkina - Transforming Conversational AI: Exploring the Power …, 2024","MR Douglas - arXiv preprint arXiv:2307.05782, 2023"],"snippet":"Artificial intelligence is making spectacular progress, and one of the best examples is the development of large language models (LLMs) such as OpenAI's GPT series. In these lectures, written for readers with a background in mathematics or physics …","url":["https://arxiv.org/pdf/2307.05782","https://link.springer.com/chapter/10.1007/979-8-8688-0110-5_4"]}
{"year":"2023","title":"Large Language Models' Understanding of Math: Source Criticism and Extrapolation","authors":["R Yousefzadeh, X Cao - arXiv preprint arXiv:2311.07618, 2023"],"snippet":"… Common Crawl is particularly interesting. The GPT-f model developed for mathematical learning was trained on 300 billion tokens from CommonCrawl… The size of the most recent CommonCrawl is 390 TiB including the contents of 3.1 billion …","url":["https://arxiv.org/pdf/2311.07618"]}
{"year":"2023","title":"Large Language Models, scientific knowledge and factuality: A systematic analysis in antibiotic discovery","authors":["M Wysocka, O Wysocki, M Delmas, V Mutel, A Freitas - arXiv preprint arXiv …, 2023"],"snippet":"Inferring over and extracting information from Large Language Models (LLMs) trained on a large corpus of scientific literature can potentially drive a new era in biomedical research, reducing the barriers for accessing existing medical evidence …","url":["https://arxiv.org/pdf/2305.17819"]}
{"year":"2023","title":"Large Scale Fine-Tuned Transformers Models Application for Business Names Generation","authors":["M Lukauskas, T Rasymas, M Minelga, D Vaitmonas - Computing and Informatics, 2023"],"snippet":"… on larger datasets, leading to pre-trained systems such as BERT (Bidirectional Encoder Representations from Transformers) and GPT (Generative Pre-trained Transformer), which have been trained on large language datasets such as the …","url":["https://www.cai.sk/ojs/index.php/cai/article/download/2023_3_525/1228"]}
{"year":"2023","title":"Subject-verb Agreement with Seq2Seq Transformers: Bigger Is Better, but Still Not Best","authors":["MA Wilson, Z Zhou, R Frank - Proceedings of the Society for Computation in …, 2023"],"snippet":"Past work (Linzen et al., 2016; Goldberg, 2019, ao) has used the performance of neural network language models on subject-verb agreement to argue that such models possess structure-sensitive grammatical knowledge. We investigate what …","url":["https://scholarworks.umass.edu/cgi/viewcontent.cgi?article=1303&context=scil"]}
{"year":"2023","title":"Submission of USTC's system for the IWSLT 2023-Offline Speech Translation Track","authors":["X Zhou, J Cui, Z Ye, Y Wang, L Xu, H Zhang, W Zhang… - Proceedings of the 20th …, 2023"],"snippet":"This paper describes the submissions of the research group USTC-NELSLIP to the 2023 IWSLT Offline Speech Translation competition, which involves translating spoken English into written Chinese. We utilize both cascaded models and end-to-end …","url":["https://aclanthology.org/2023.iwslt-1.15.pdf"]}
{"year":"2023","title":"Subversion of the Human Aura: A Crisis in Representation","authors":["NK Hayles - American Literature, 2023"],"snippet":"The human aura is now being subverted by a variety of simulacra. OpenAI’s language-generation program GPT-3 illustrates the challenges of interpreting algorithmic-generated texts. This article advocates interpretive strategies that …","url":["https://read.dukeupress.edu/american-literature/article-abstract/doi/10.1215/00029831-10575063/344236"]}
{"year":"2023","title":"Subword-based Neural Machine Translation for low-resource fusion languages","authors":["A Nürnberger, EW De Luca, M Gasser","AM Gezmu - 2023"],"snippet":"Neural approaches, which are currently state-of-the-art in many areas, have contributed significantly to the exciting advancements in machine translation. However, Neural Machine Translation (NMT) requires a substantial quantity and …","url":["https://opendata.uni-halle.de/bitstream/1981185920/105783/1/Gezmu_Andargachew_Mekonnen_Dissertation_2023.pdf","https://repo.bibliothek.uni-halle.de/bitstream/1981185920/105783/1/Gezmu_Andargachew_Mekonnen_Dissertation_2023.pdf"]}
{"year":"2023","title":"Suicidal Text Detection in Social Media","authors":["MP Karthikeyan, I Ajay, R Magesh, G Saran - 2023"],"snippet":"This system is developed with the aim of providing and insight information of people who are personally disturbed by the factors of either their personal life or family background or bully at school or work pressure. With the help of people’s online …","url":["https://www.ijrar.org/papers/IJRAR23B1004.pdf"]}
{"year":"2023","title":"Suicide risk assessment using word-level model with dictionary-based risky posts selection","authors":["YS Tsai, ALP Chen - Multimedia Tools and Applications, 2023"],"snippet":"Suicide is a serious issue around the world and is a leading cause of death in US. In the past 20 years, the suicide rate has seen a significant increase of 35%. With the rapid development of information technology, more and more people begin to use …","url":["https://link.springer.com/article/10.1007/s11042-023-16361-2"]}
{"year":"2023","title":"SuperDialseg: A Large-scale Dataset for Supervised Dialogue Segmentation","authors":["J Jiang, C Dong, A Aizawa, S Kurohashi - arXiv preprint arXiv:2305.08371, 2023"],"snippet":"… For TextTiling+Glove, we used the version pretrained with 42 billion tokens of web data from Common Crawl.For GreedySeg and CSM, we corrected some inconsistencies in their open-sourced codes with respect to their original published …","url":["https://arxiv.org/pdf/2305.08371"]}
{"year":"2023","title":"Vision-Language Models for Vision Tasks: A Survey","authors":["J Zhang, J Huang, S Jin, S Lu - arXiv preprint arXiv:2304.00685, 2023"],"snippet":"Most visual recognition studies rely heavily on crowd-labelled data in deep neural networks (DNNs) training, and they usually train a DNN for each single visual recognition task, leading to a laborious and time-consuming visual recognition …","url":["https://arxiv.org/pdf/2304.00685"]}
{"year":"2023","title":"ViSoBERT: A Pre-Trained Language Model for Vietnamese Social Media Text Processing","authors":["QN Nguyen, TC Phan, DV Nguyen, K Van Nguyen - arXiv preprint arXiv:2310.11166, 2023"],"snippet":"English and Chinese, known as resource-rich languages, have witnessed the strong development of transformer-based language models for natural language processing tasks. Although Vietnam has approximately 100M people speaking …","url":["https://arxiv.org/pdf/2310.11166"]}
{"year":"2023","title":"Visual experience modulates the sensitivity to the distributional history of words in natural language","authors":["G Anceresi, D Gatti, M Marelli, T Vecchi, L Rinaldi - 2023"],"snippet":"Different experiential traces (ie, linguistic, motor and perceptual) are likely contributing to the organization of human semantic knowledge. Here, we aimed to address this issue by investigating whether visual experience may affect the …","url":["https://psyarxiv.com/jqa9k/download?format=pdf"]}
{"year":"2023","title":"Visual Question Answering: A Survey on Techniques and Common Trends in Recent Literature","authors":["ACAM de Faria, FC Bastos, JVNA da Silva, VL Fabris… - arXiv preprint arXiv …, 2023","CFG dos Sants, F de Castro Bastos, ACAM de Faria… - 2023"],"snippet":"… More technically, this new architecture has a language model based on Text-to-Text transformer and uses the base of T5 [72] because of its extensive pre-training data using Common Crawl, that is 750GB of cleaned English text data. To complement, a …","url":["https://arxiv.org/pdf/2305.11033","https://www.researchsquare.com/article/rs-3015858/latest.pdf"]}
{"year":"2023","title":"Visual-Semantic Learning","authors":["C Yin - 2023"],"snippet":"… of 15 words, while the questions with length smaller than 15 were padded with zeros to the length of 15 (10 for the MSVD-QA dataset), and each word in the questions was represented as a 300D vectors using the GloVe word embedding [214] …","url":["https://search.proquest.com/openview/f5cf7cabc3e1cbcb0a2fece160ce1319/1?pq-origsite=gscholar&cbl=18750&diss=y"]}
{"year":"2023","title":"Visualisation and Classification of Phishing URL using Ensemble Learning Algorithms and Hyper-Parameter Tuning","authors":["G Agarwal, C Goel, K Jindal, T Subbulakshmi - 2023 Third International Conference …, 2023"],"snippet":"… Alexa and Common Crawl were used to gather legitimate URLs. Lexical features, host-based features, and correlated feature groups are the three categories used to classify the features. Lexical features are textual aspects of the URL rather than the …","url":["https://ieeexplore.ieee.org/abstract/document/10176642/"]}
{"year":"2023","title":"Vocabulary-free Image Classification","authors":["A Conti, E Fini, M Mancini, P Rota, Y Wang, E Ricci - arXiv preprint arXiv:2306.00917, 2023"],"snippet":"Recent advances in large vision-language models have revolutionized the image classification paradigm. Despite showing impressive zero-shot capabilities, a pre-defined set of categories, aka the vocabulary, is assumed at test time for composing the …","url":["https://arxiv.org/pdf/2306.00917"]}
2024.jsonl
CHANGED
The diff for this file is too large to render.
See raw diff
2025.jsonl
ADDED
The diff for this file is too large to render.
See raw diff