Dataset: embedding-data/SPECTER
Tasks: Sentence Similarity
Sub-tasks: semantic-similarity-classification
Modalities: Text
Formats: json
Languages: English
Size: 100K - 1M
Commit 36472f8 (parent: 5598cd9): Update README.md

README.md CHANGED:
@@ -4,6 +4,11 @@ language:
 - en
 paperswithcode_id: embedding-data/SPECTER
 pretty_name: SPECTER
+task_categories:
+- sentence-similarity
+- paraphrase-mining
+task_ids:
+- semantic-similarity-classification
 ---
 
 # Dataset Card for "SPECTER"
@@ -41,39 +46,43 @@ pretty_name: SPECTER
 
 ### Dataset Summary
 
-A new method to generate document-level embeddings of scientific documents, based on
-pretraining a Transformer language model on a powerful signal of document-level relatedness: the citation graph.
-Unlike existing pretrained language models, SPECTER can be easily applied to
-downstream applications without task-specific fine-tuning.
+Dataset containing triplets of three sentences each (anchor, positive, and negative); the sentences are titles of papers.
 
 Disclaimer: The team releasing SPECTER did not upload the dataset to the Hub and did not write a dataset card.
 These steps were done by the Hugging Face team.
 
-### Supported Tasks and Leaderboards
-
-[More Information Needed](https://github.com/allenai/specter)
-
-### Languages
-
-[More Information Needed](https://github.com/allenai/specter)
-
 ## Dataset Structure
-
-A text file with the ids of the documents you want to embed and a JSON metadata file
-consisting of the title and abstract information.
-Sample files are provided in the `data/` directory to get you started.
-Input data format is according to:
+Each example is a dictionary with a key, "set", whose value is a list of three sentences (anchor, positive, and negative):
 
 ```
-…
+{"set": [anchor, positive, negative]}
+{"set": [anchor, positive, negative]}
+...
+{"set": [anchor, positive, negative]}
+```
+This dataset is useful for training Sentence Transformers models. Refer to the following post on how to train models using triplets.
 
+### Usage Example
+Install the 🤗 Datasets library with `pip install datasets` and load the dataset from the Hub with:
+```python
+from datasets import load_dataset
+dataset = load_dataset("embedding-data/SPECTER")
+```
+The dataset is loaded as a `DatasetDict` and has the format:
+```python
+DatasetDict({
+    train: Dataset({
+        features: ['set'],
+        num_rows: 684100
+    })
+})
 ```
+Review an example `i` with:
+```python
+dataset["train"][i]["set"]
+```
 
 ### Curation Rationale
@@ -129,24 +138,7 @@ metadata.json format:
 
 ### Citation Information
 
-```
-@inproceedings{specter2020cohan,
-  title={{SPECTER: Document-level Representation Learning using Citation-informed Transformers}},
-  author={Arman Cohan and Sergey Feldman and Iz Beltagy and Doug Downey and Daniel S. Weld},
-  booktitle={ACL},
-  year={2020}
-}
-```
-
-SciDocs benchmark
-
-The SciDocs evaluation framework consists of a suite of evaluation tasks designed for document-level tasks.
-
-Link to SciDocs:
-
-- [https://github.com/allenai/scidocs](https://github.com/allenai/scidocs)
 
 ### Contributions
 
-Thanks to [@armancohan](https://github.com/armancohan), [@sergeyf](https://github.com/sergeyf), [@haroldrubio](https://github.com/haroldrubio), [@jinamshah](https://github.com/jinamshah) for adding this dataset.
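Taken together, the usage example added in this commit works end to end. A minimal sketch of loading the dataset and unpacking one triplet (the `embedding-data/SPECTER` repo id, the `train` split, and the `set` feature come from the card; the rest is illustrative):

```python
from datasets import load_dataset

# Single "train" split with one feature, "set".
dataset = load_dataset("embedding-data/SPECTER")

# Each "set" value is a list of three paper titles:
# [anchor, positive, negative].
anchor, positive, negative = dataset["train"][0]["set"]
print("anchor:  ", anchor)
print("positive:", positive)
print("negative:", negative)
```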
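The new card recommends these triplets for training Sentence Transformers models but does not link the post it mentions. A minimal training sketch, assuming the standard `sentence-transformers` fit API (`InputExample`, `losses.TripletLoss`); the base model, slice size, and hyperparameters are illustrative assumptions, not from the card:

```python
from torch.utils.data import DataLoader
from datasets import load_dataset
from sentence_transformers import SentenceTransformer, InputExample, losses

dataset = load_dataset("embedding-data/SPECTER")

# Wrap each [anchor, positive, negative] triplet as an InputExample.
# The 10k-row slice keeps the sketch quick; drop .select() to use all rows.
train_examples = [
    InputExample(texts=row["set"])
    for row in dataset["train"].select(range(10_000))
]
train_dataloader = DataLoader(train_examples, shuffle=True, batch_size=16)

# TripletLoss pulls each anchor toward its positive and away from its negative.
model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
train_loss = losses.TripletLoss(model=model)

model.fit(
    train_objectives=[(train_dataloader, train_loss)],
    epochs=1,
    warmup_steps=100,
)
```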
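For context on the "Dataset Structure" text this commit removes: the original SPECTER pipeline took a text file of document ids plus a JSON metadata file of titles and abstracts. A hypothetical illustration of that layout (`metadata.json` is named in the diff context above; the ids file name and exact field nesting are assumptions):

```python
import json

# Hypothetical metadata.json: title and abstract keyed by document id.
metadata = {
    "12345": {"title": "An example paper title",
              "abstract": "An example abstract."},
    "67890": {"title": "Another paper title",
              "abstract": "Another abstract."},
}
with open("metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)

# Hypothetical ids file: one document id to embed per line.
with open("ids.txt", "w") as f:
    f.write("\n".join(metadata) + "\n")
```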