TimSchopf committed
Commit 1a6fc4c · 1 Parent(s): 792b171

Update README.md

Files changed (1)
  1. README.md +23 -5
README.md CHANGED
@@ -11,9 +11,11 @@ pipeline_tag: text-classification
tags:
- science
- scholarly
+ datasets:
+ - TimSchopf/nlp_taxonomy_data
---

- # NLP Concept Classifier
+ # NLP Taxonomy Classifier

This is a fine-tuned BERT-based language model to classify NLP-related research papers according to concepts included in the [NLP taxonomy](#nlp-taxonomy).
It is a multi-label classifier that can predict concepts from all levels of the NLP taxonomy.
@@ -21,11 +23,11 @@ If the model identifies a lower-level concept, it did learn to predict both the
The model is fine-tuned on a weakly labeled dataset of 178,521 scientific papers from the ACL Anthology, the arXiv cs.CL domain, and Scopus.
Prior to fine-tuning, the model is initialized with weights from [allenai/specter2_base](https://huggingface.co/allenai/specter2_base).

- 📄 Paper: [Exploring the Landscape of Natural Language Processing Research (RANLP 2023)](https://arxiv.org/abs/2307.10652).
+ 📄 Paper: [Exploring the Landscape of Natural Language Processing Research (RANLP 2023)](https://aclanthology.org/2023.ranlp-1.111).

💻 Code: [https://github.com/sebischair/Exploring-NLP-Research](https://github.com/sebischair/Exploring-NLP-Research)

- 💾 Data: [https://github.com/sebischair/Exploring-NLP-Research/blob/main/emnlp22_papers_with_nlp_taxonomy_labels.csv](https://github.com/sebischair/Exploring-NLP-Research/blob/main/emnlp22_papers_with_nlp_taxonomy_labels.csv)
+ 💾 Data: [https://huggingface.co/datasets/TimSchopf/nlp_taxonomy_data](https://huggingface.co/datasets/TimSchopf/nlp_taxonomy_data)

The dataset contains the titles and abstracts of all EMNLP 22 papers, which were manually labeled according to the NLP taxonomy.

@@ -37,14 +39,15 @@ The dataset contains the titles and abstracts of all EMNLP 22 papers, which were

## How to use the fine-tuned model

+ ### Get predictions by loading the model directly
```python
from typing import List
import torch
from torch.utils.data import DataLoader
from transformers import BertForSequenceClassification, AutoTokenizer
# load model and tokenizer
- tokenizer = AutoTokenizer.from_pretrained('TimSchopf/specter2_nlp_classifier')
- model = BertForSequenceClassification.from_pretrained('TimSchopf/specter2_nlp_classifier')
+ tokenizer = AutoTokenizer.from_pretrained('TimSchopf/nlp_taxonomy_classifier')
+ model = BertForSequenceClassification.from_pretrained('TimSchopf/nlp_taxonomy_classifier')

# prepare data
papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
@@ -130,6 +133,21 @@ def predict_nlp_concepts(model, tokenizer, texts: List[str], batch_size=8, devic
# predict concepts of NLP papers
numerical_predictions, class_name_predictions = predict_nlp_concepts(model=model, tokenizer=tokenizer, texts=title_abs)
```
+ ### Use a pipeline to get predictions
+
+ ```python
+ from transformers import pipeline
+
+ pipe = pipeline("text-classification", model="TimSchopf/nlp_taxonomy_classifier")
+
+ # prepare data
+ papers = [{'title': 'Attention Is All You Need', 'abstract': 'The dominant sequence transduction models are based on complex recurrent or convolutional neural networks in an encoder-decoder configuration. The best performing models also connect the encoder and decoder through an attention mechanism. We propose a new simple network architecture, the Transformer, based solely on attention mechanisms, dispensing with recurrence and convolutions entirely. Experiments on two machine translation tasks show these models to be superior in quality while being more parallelizable and requiring significantly less time to train. Our model achieves 28.4 BLEU on the WMT 2014 English-to-German translation task, improving over the existing best results, including ensembles by over 2 BLEU. On the WMT 2014 English-to-French translation task, our model establishes a new single-model state-of-the-art BLEU score of 41.8 after training for 3.5 days on eight GPUs, a small fraction of the training costs of the best models from the literature. We show that the Transformer generalizes well to other tasks by applying it successfully to English constituency parsing both with large and limited training data.'},
+ {'title': 'SimCSE: Simple Contrastive Learning of Sentence Embeddings', 'abstract': 'This paper presents SimCSE, a simple contrastive learning framework that greatly advances state-of-the-art sentence embeddings. We first describe an unsupervised approach, which takes an input sentence and predicts itself in a contrastive objective, with only standard dropout used as noise. This simple method works surprisingly well, performing on par with previous supervised counterparts. We find that dropout acts as minimal data augmentation, and removing it leads to a representation collapse. Then, we propose a supervised approach, which incorporates annotated pairs from natural language inference datasets into our contrastive learning framework by using "entailment" pairs as positives and "contradiction" pairs as hard negatives. We evaluate SimCSE on standard semantic textual similarity (STS) tasks, and our unsupervised and supervised models using BERT base achieve an average of 76.3% and 81.6% Spearmans correlation respectively, a 4.2% and 2.2% improvement compared to the previous best results. We also show -- both theoretically and empirically -- that the contrastive learning objective regularizes pre-trained embeddings anisotropic space to be more uniform, and it better aligns positive pairs when supervised signals are available.'}]
+ # concatenate title and abstract with [SEP] token
+ title_abs = [d['title'] + tokenizer.sep_token + (d.get('abstract') or '') for d in papers]
+
+ pipe(title_abs, return_all_scores=True)
+ ```
## Evaluation Results

The model was evaluated on a manually labeled test set of 828 different EMNLP 2022 papers. The following shows the average evaluation results for classifying papers according to the NLP taxonomy on three different training runs. Since the distribution of classes is very unbalanced, we report micro scores.
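
As a reference for the micro scores mentioned above, here is a minimal sketch of how micro-averaged precision, recall, and F1 can be computed for multi-label predictions. This is an illustration, not part of the committed README; the label count and example arrays are made-up assumptions.

```python
import numpy as np

def micro_scores(y_true: np.ndarray, y_pred: np.ndarray):
    """Micro-averaged precision, recall, and F1 for multi-label predictions.

    y_true, y_pred: binary arrays of shape (num_papers, num_labels).
    True/false positives and false negatives are pooled over all labels
    before computing the scores, which is why micro averaging is the
    common choice when the class distribution is very unbalanced.
    """
    tp = np.logical_and(y_true == 1, y_pred == 1).sum()
    fp = np.logical_and(y_true == 0, y_pred == 1).sum()
    fn = np.logical_and(y_true == 1, y_pred == 0).sum()
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    f1 = 2 * precision * recall / (precision + recall) if (precision + recall) else 0.0
    return precision, recall, f1

# illustrative example: 3 papers, 4 taxonomy labels (values are invented)
y_true = np.array([[1, 0, 1, 0], [0, 1, 0, 0], [1, 1, 0, 1]])
y_pred = np.array([[1, 0, 0, 0], [0, 1, 1, 0], [1, 1, 0, 1]])
print(micro_scores(y_true, y_pred))
```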