ChenyuHeidiZhang committed 066db5f (parent: ed843d2): update more details

README.md CHANGED
It also contains data from more domains:

| Dataset | Size | Comments + Scores | Preferences | Number of Domains |
| -------------------- | ---- | ------------------ | ------------- | ------------------ |
| SHP-2 | 4.8M | Yes | Yes | 129 (70 from Reddit, 59 from StackExchange) |
| SHP | 385K | Yes | Yes | 18 |
| ELI5 | 270K | Yes | No | 3 |

## Data Structure

There are 2 directories, one for Reddit and one for StackExchange. There are 70 subdirectories under `reddit/`, one for each subreddit, and 59 subdirectories under `stackexchange/`, one for each StackExchange site.
Each subdirectory contains a JSONL file for the training, validation, and test data.

Here's how to get the data using Huggingface's `datasets` library:

```python
from datasets import load_dataset

# Load the entire dataset
dataset = load_dataset("stanfordnlp/shp-2")

# Load one of the subreddits
dataset = load_dataset("stanfordnlp/shp-2", data_dir="reddit/askculinary")

# Load one of the stackexchange sites
dataset = load_dataset("stanfordnlp/shp-2", data_dir="stackexchange/stack_academia")
```
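
Each of these calls returns a `DatasetDict` with `train`, `validation`, and `test` splits. As a quick sanity check (a minimal sketch; it only assumes the load above succeeded):

```python
# Print the size of each split and peek at one record.
for split_name, split in dataset.items():
    print(split_name, len(split))
print(dataset["train"][0]["post_id"])
```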

Here's an example from `reddit/askculinary/train.json`:

[...]

where the fields include:
- ```seconds_difference```: how many seconds after the less preferred comment the more preferred one was created (will always be >= 0) (integer)
- ```score_ratio```: the ratio of the more preferred comment's score to the less preferred comment's score (will be >= 1) (float)
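
Together, these two fields encode the dataset's timestamp control: the preferred comment was written later, so its higher score cannot be explained by extra exposure. A minimal sketch of the invariants every pair should satisfy (`check_invariants` is an illustrative helper, not part of the dataset):

```python
def check_invariants(example):
    # The more preferred comment was created at the same time or later...
    assert float(example["seconds_difference"]) >= 0
    # ...and has a score at least as high as the less preferred one.
    assert float(example["score_ratio"]) >= 1
```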

Here's an example from `stackexchange/stack_academia/validation.json`:
```
{
 `id`:"87434_87453",
 `post_id`:"87393",
 `domain`:"academia_validation",
 `history`:"What to answer an author asking me if I reviewed his/her paper? <sep> Suppose I review someone's paper anonymously, the paper gets accepted, and a year or two later we meet e.g. in a social event and he/she asks me "did you review my paper?". What should I answer? There are several sub-questions here: Suppose the review was a good one, and the paper eventualy got accepted, so I do not mind telling that I was the reviewer. Is there any rule/norm prohibiting me from telling the truth? Suppose the review was not so good, so I do not want to reveal. What can I answer? If I just say "I am not allowed to tell you", this immediately reveals me... On the other hand, I do not want to lie. What options do I have?",
 `created_at_utc_A`:"1490989560.0",
 `created_at_utc_B`:"1491012608.0",
 `score_A`:"2",
 `score_B`:"5",
 `human_ref_A`:"I am aware of at least one paper where a referee went out of cover (after the review process of course) and was explicitly mentioned in a later paper: <blockquote> X and Y thank Z, who as the anonymous referee was kind enough to point out the error (and later became non-anonymous). </blockquote> so it is sure fine to answer truthfully that yes you did review, but only if you wish of course (and most likely if you have been helpful and the authors of the paper responsive).",
 `human_ref_B`:"Perhaps you should follow the example of Howard Percy Robertson (known as the 'R' in the famous FLRW, or Friedmann-Lemaître-Robertson-Walker metric used in physical cosmology.) He was the referee of the famous Einstein-Rosen paper, which was rejected by Physical Review, prompting Einstein never to publish in Physical Review again. Einstein ignored the referee report, but months later, it seems, Robertson had a chance to talk to Einstein and may have helped convince him of the error of his ways. However, as far as we know, he never revealed to Einstein that he was the anonymous referee for Physical Review. It was not until 2005 I believe, long after the death of all participants, that Physical Review chose to disclose the referee's identity (http://physicstoday.scitation.org/doi/full/10.1063/1.2117822).",
 `labels`:"0",
 `metadata_A`:"Post URL: https://academia.stackexchange.com/questions/87393, Response URL: https://academia.stackexchange.com/questions/87434, Post author username: Erel Segal-Halevi, Post author profile: https://academia.stackexchange.com/users/787, Response author username: mts, Response author profile: https://academia.stackexchange.com/users/49583",
 `metadata_B`:"Post URL: https://academia.stackexchange.com/questions/87393, Response URL: https://academia.stackexchange.com/questions/87453, Post author username: Erel Segal-Halevi, Post author profile: https://academia.stackexchange.com/users/787, Response author username: Viktor Toth, Response author profile: https://academia.stackexchange.com/users/7938",
 `seconds_difference`:"23048.0",
 `score_ratio`:"2.5"
}
```
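
To turn a record like this into a (preferred, dispreferred) pair, key off the `labels` field. This sketch assumes the same convention as the original SHP dataset (`labels` is 1 when `human_ref_A` is preferred), which matches this example: `labels` is 0 and B has the higher score.

```python
def to_preference_pair(example):
    """Return (preferred, dispreferred) response texts for one example."""
    if int(example["labels"]) == 1:
        return example["human_ref_A"], example["human_ref_B"]
    return example["human_ref_B"], example["human_ref_A"]
```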
## Dataset Design

### Domain Selection

TODO: check if this section is still correct

The data is sourced from Reddit and StackExchange, which are both public forums organized into different sub-domains.

SHP-2 contains a train, validation, and test split for comments scraped from each sub-domain. We chose sub-domains based on:
1. whether they were well-known (subscriber count >= 100K)
2. whether posts were expected to pose a question or instruction
3. whether responses were valued based on how *helpful* they were
4. whether comments had to be rooted in some objectivity, instead of being entirely about personal experiences (e.g., `askscience` vs. `AskAmericans`)

The train/validation/test splits were created by splitting the post IDs of a subreddit in 90%/5%/5% proportions respectively, so that no post would appear in multiple splits.
Since different posts have different numbers of comments, the number of preferences in each split is not exactly 90%/5%/5%.
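
A minimal sketch of this kind of grouped split (illustrative only; `split_post_ids` is a hypothetical helper, not the script used to build the dataset):

```python
import random

def split_post_ids(post_ids, seed=0):
    """90/5/5 split over post IDs, so no post appears in two splits."""
    ids = sorted(set(post_ids))
    random.Random(seed).shuffle(ids)
    n_train, n_valid = int(0.90 * len(ids)), int(0.05 * len(ids))
    return (ids[:n_train],                   # train
            ids[n_train:n_train + n_valid],  # validation
            ids[n_train + n_valid:])         # test
```
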
### Data Selection

TODO: check if this section holds for stack

The score of a post/comment is 1 plus the number of upvotes (approvals) it gets from users, minus the number of downvotes (disapprovals) it gets (e.g., a comment with 3 upvotes and 1 downvote has a score of 3).
The value of a score is relative; in domains (posts) with more traffic, there will be more higher-scoring posts (comments).
Within a post, comments posted earlier will tend to have a higher score simply due to having more exposure, which is why using timestamp information is essential when inferring preferences.

Given a post P and two comments (A,B) we only included the preference A > B in the dataset if [...]

Reddit makes it very difficult to get anything beyond the top 1000 posts for each subreddit.
We started with the top-scoring 1000 posts (of all time) and searched for the 25 most similar posts to each one using Reddit's search function to get up to 7500 unique post IDs per subreddit.
### Reddit Preprocessing

TODO: add stack preprocessing?

We tried to keep preprocessing to a minimum. Subreddit-specific abbreviations were expanded (e.g., "CMV" to "Change my view that").
In hyperlinks, only the referring text was kept and the URL was removed (if the URL was written out, then it was kept).
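
A hedged sketch of the hyperlink rule (the actual preprocessing script is not part of this card; this assumes Markdown-style links):

```python
import re

# Replace [text](url) with just "text"; URLs written out in plain text
# are left untouched, matching the rule described above.
MD_LINK = re.compile(r"\[([^\]]+)\]\([^)]+\)")

def strip_hyperlinks(s: str) -> str:
    return MD_LINK.sub(r"\1", s)
```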
## Building a Preference Model

TODO: train a new model on all data?

### Finetuning

If you want to finetune a model to predict human preferences (e.g., for NLG eval [...]):

[...]
To avoid this, truncate the post text (in the `history` field) as much as possible, such that the whole input is under 512 tokens (do not truncate the comment(s), however).
If this is still over 512 tokens, simply skip the example.
2. **Use a sufficiently large model.**
Finetuning a single FLAN-T5-xl model across [the 385K SHP training data](https://huggingface.co/datasets/stanfordnlp/SHP) should give you a test accuracy between 72-73% (across all domains on examples where the entire input fits within the token limit), ranging from 65-80% on individual subreddits.
3. **Do in-domain prediction.** Out-of-domain performance will be poor if the subreddits are unrelated (e.g., if you fine-tune on `askculinary` preferences and test on `askcarguys` preferences).
4. **Train for fewer epochs.** The InstructGPT paper suggests training a reward model for only 1 epoch.
Since the same comment appears in multiple preferences, it is easy to overfit to the data.
[...]
Preferences with a large `score_ratio` (e.g., comment A having 2x the score of comment B) will provide a stronger signal for finetuning the model, so you may only want to consider preferences above a certain `score_ratio`.
The number of preferences per post is Pareto-distributed, so to prevent the model from over-fitting to certain posts, you may want to limit the number of preferences from a particular post.
A minimal sketch of this preprocessing appears after this list.
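
The sketch below implements the truncation and `score_ratio` filtering suggested above. It is illustrative only: the prompt template, helper names, and the use of the FLAN-T5 tokenizer are assumptions, not part of this dataset card.

```python
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("google/flan-t5-xl")
MAX_TOKENS = 512

def format_example(ex):
    """Truncate the post (`history`) so the whole input stays under the
    token limit; return None (i.e., skip) if it still does not fit."""
    template = ("POST: {post}\n\nRESPONSE A: {a}\n\nRESPONSE B: {b}\n\n"
                "Which response is more helpful?")
    # Approximate token budget left for the post after the fixed parts.
    fixed = template.format(post="", a=ex["human_ref_A"], b=ex["human_ref_B"])
    budget = MAX_TOKENS - len(tokenizer(fixed).input_ids)
    if budget <= 0:
        return None  # the comments alone exceed the limit: skip the example
    post_ids = tokenizer(ex["history"]).input_ids[:budget]
    post = tokenizer.decode(post_ids, skip_special_tokens=True)
    return template.format(post=post, a=ex["human_ref_A"], b=ex["human_ref_B"])

def keep(ex, min_ratio=2.0):
    """Keep only strongly-held preferences (score_ratio >= min_ratio)."""
    return float(ex["score_ratio"]) >= min_ratio
```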
<!-- ### Evaluating

Since it is easier to predict strongly-held preferences than weakly-held ones, instead of reporting a single accuracy value, we recommend reporting a performance curve as a function of the `score_ratio`.
For example, here is the accuracy curve for a FLAN-T5-xl model trained on the askculinary data using the suggestions above.
The orange line is from finetuning only on preferences with a 2+ score ratio and [...]



We see that finetuning on less -- but higher quality -- data leads to higher accuracies on test data with a score ratio below 3.5, with no real downsides!
Note that any examples whose inputs did not fit within the token limit were left out of the experiment, since the model could not be expected to handle them. -->

### SteamSHP - An Open-Source Preference Model