  - split: retain
    path: github/retain-*
license: apache-2.0
---

# 📖 unlearn_dataset

The unlearn_dataset serves as a benchmark for evaluating unlearning methodologies in pre-trained large language models across diverse domains, including arXiv and GitHub.

## 🔍 Loading the datasets

To load the dataset:

```python
from datasets import load_dataset

dataset = load_dataset("llmunlearn/unlearn_dataset", name="arxiv", split="forget")
```

* Available configuration names and corresponding splits:
  - `arxiv`: `forget, approximate, retain`
  - `github`: `forget, approximate, retain`
  - `general`: `evaluation, retain`
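The configuration/split matrix above can also be written down in code. Below is a minimal sketch; the `AVAILABLE` table and the `load_all` helper are illustrative names introduced here, not part of the `datasets` API or this repository:

```python
# Configuration names and their splits, mirroring the list above.
AVAILABLE = {
    "arxiv": ["forget", "approximate", "retain"],
    "github": ["forget", "approximate", "retain"],
    "general": ["evaluation", "retain"],
}

def load_all(repo: str = "llmunlearn/unlearn_dataset") -> dict:
    """Load every (config, split) pair into a dict keyed by 'config/split'.

    Fetches data from the Hugging Face Hub, so it needs network access.
    """
    from datasets import load_dataset  # requires `pip install datasets`
    return {
        f"{name}/{split}": load_dataset(repo, name=name, split=split)
        for name, splits in AVAILABLE.items()
        for split in splits
    }
```

Calling `load_all()` would fetch all eight splits at once; loading one split at a time with `load_dataset`, as shown above, works just as well.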

## 🛠️ Codebase

For evaluating unlearning methods on our datasets, visit our [GitHub repository](https://github.com/yaojin17/Unlearning_LLM).

## ⭐ Citing our Work

If you find our codebase or dataset useful, please consider citing our paper:

```bib
@article{yao2024machine,
  title={Machine Unlearning of Pre-trained Large Language Models},
  author={Yao, Jin and Chien, Eli and Du, Minxin and Niu, Xinyao and Wang, Tianhao and Cheng, Zezhou and Yue, Xiang},
  journal={arXiv preprint arXiv:2402.15159},
  year={2024}
}
```