Update README.md
README.md (CHANGED):
download_size: 382068275
dataset_size: 857168232
---

# ArXiv papers from RedPajama-Data originally published in February 2023

We collect the ArXiv papers released shortly before the training data cutoff date for the [OpenLLaMA models](https://huggingface.co/openlm-research/open_llama_7b).

The OpenLLaMA models (V1) were trained on [RedPajama data](https://huggingface.co/datasets/togethercomputer/RedPajama-Data-1T).
The last batch of ArXiv papers included in that dataset consists of papers published in February 2023.
To get member documents close to the cutoff date, we collect the 13,155 papers published in "2302" that are part of the training dataset.
We process the raw LaTeX files using this [script](https://github.com/togethercomputer/RedPajama-Data/blob/rp_v1/data_prep/arxiv/run_clean.py).
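
As a convenience, here is a minimal sketch of loading the dataset with the Hugging Face `datasets` library. The repository id below is a placeholder (this card does not state the actual id), and the `text` column follows the RedPajama output convention; adjust both to the actual schema.

```python
# Minimal sketch: load the cleaned ArXiv papers (placeholder repository id).
from datasets import load_dataset

papers = load_dataset("<this-dataset-repo-id>", split="train")

print(papers)                   # column names and number of papers
print(papers[0]["text"][:500])  # assumes a `text` column, as produced by run_clean.py
```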

This dataset has been used as the source of 'member' documents to develop document-level membership inference attacks (MIAs) against LLMs, using data collected shortly before (members) and after (non-members) the training cutoff date of the target model ([the suite of OpenLLaMA models](https://huggingface.co/openlm-research/open_llama_7b)).
For more details and results, see the Regression Discontinuity Design (RDD) section in the paper ["SoK: Membership Inference Attacks on LLMs are Rushing Nowhere (and How to Fix It)"](https://arxiv.org/pdf/2406.17975).

For the non-members in the RDD setup, we refer to our [GitHub repo](https://github.com/computationalprivacy/mia_llms_benchmark/tree/main/document_level).
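
To illustrate the kind of evaluation this member/non-member split supports, here is a hedged sketch of scoring documents from both sides of the cutoff with an arbitrary attack and measuring their separation via AUC. `mia_score` is a hypothetical stand-in for any attack from the benchmark repo, not a real API.

```python
# Hypothetical sketch: evaluate a document-level MIA using papers published
# shortly before (members) and shortly after (non-members) the training cutoff.
from sklearn.metrics import roc_auc_score

def mia_score(document: str) -> float:
    # Placeholder: return a membership score for one document,
    # e.g. the negative loss of the target model on its tokens.
    raise NotImplementedError

def evaluate(members: list[str], non_members: list[str]) -> float:
    scores = [mia_score(doc) for doc in members + non_members]
    labels = [1] * len(members) + [0] * len(non_members)
    # An AUC near 0.5 means the attack cannot distinguish members from non-members.
    return roc_auc_score(labels, scores)
```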