# Notes

These are general notes covering the entire process, from raw-dump processing to semi-final results.

# Filtering Steps

## Unpacking (RedditUnpack.py)

1. Run `python RedditUnpack.py process <submissions/comments> <ZST folder> <Output Prefix>` to process submissions and comments respectively.
 - This unpacks the submissions and comments into jsonl files, expanding them to roughly 8 TiB uncompressed, so make sure you have plenty of storage or some form of disk compression.
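
The core of the unpacking step is a streaming line split over the decompressed byte stream. A minimal sketch of that loop (decompression itself would be handled separately, e.g. with the `zstandard` package; `iter_jsonl_records` is a hypothetical helper, not the actual code):

```python
import io
import json

def iter_jsonl_records(stream, chunk_size=65536):
    """Yield one parsed JSON object per newline-delimited record,
    without ever holding the whole file in memory."""
    buffer = b""
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            break
        buffer += chunk
        lines = buffer.split(b"\n")
        buffer = lines.pop()  # the last piece may be a partial line
        for line in lines:
            if line.strip():
                yield json.loads(line)
    if buffer.strip():  # trailing record with no final newline
        yield json.loads(buffer)

# Tiny demo: chunk_size=7 forces records to straddle read boundaries.
data = io.BytesIO(b'{"subreddit": "askscience"}\n{"subreddit": "aww"}\n')
records = list(iter_jsonl_records(data, chunk_size=7))
```

Working on the raw byte stream like this keeps memory flat even for multi-terabyte months.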

## Filtering Subreddits (RedditSubredditIndex.py)

We don't want mostly-empty subreddits, so we first prune subreddits by subscriber count.

2. Run `python RedditSubredditIndex.py preindex <Submissions Output Folder>`
 - This is a very simple counter that collects the subscriber count for each subreddit in each month.
 - This additionally creates a folder called `IndexSubReddit` as a scratch folder for indexing subreddits.

3. Run `python RedditSubredditIndex.py combine`
 - Combines the multiple jsonl files, each created by a separate process, into one large file.
 - Deduplicates entries by subreddit name.
 - Fixes some odd inconsistencies in the pushshift dumps.
 - Keeps the highest subscriber count seen, ignoring any decrease in subscriber count.
   - This could be susceptible to subscriber-count manipulation, but that would require a lot of user accounts to pull off.

4. (Optional) `python RedditSubredditIndex.py percentile` to get a printout of percentiles: [95th, 90th, 75th, 50th, 25th, 10th, 5th].
 - Not a required step, but a good way to see which subscriber cutoff is best suited for your use case.

5. `python RedditSubredditIndex.py selection <Minimum Subscribers>` to create `sub_selects.jsonl`.
 - Writes a file called `sub_selects.jsonl` containing a jsonl list of selected subreddits.

6. `python RedditSubredditIndex.py filter-folder <Comment/Submission Folder> <Output Folder> <"Submission"/"Comments" (Mode)>`
 - The output folder is what is referred to internally as subreddits_M700 / the M700 Folder.
 - Point `<Output Folder>` to the same folder for both the submissions and the comments pass.
 - This will raise an exception when the wrong folder is selected (e.g. a comments folder when the mode is not set to `Comments`).
 - Don't open the output folder in VS Code; it will choke trying to index all the jsonl files.
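
Steps 2–5 boil down to a max-over-time subscriber index followed by a threshold cut. A rough sketch of that logic (the function names, the lower-cased dedup key, and the nearest-rank percentile are illustrative assumptions, not the actual implementation):

```python
def combine_counts(monthly_rows):
    """Keep the highest subscriber count ever seen per subreddit,
    ignoring months where the reported count decreased."""
    best = {}
    for row in monthly_rows:
        name = row["subreddit"].lower()  # assumption: dedup by lower-cased name
        count = row.get("subscribers") or 0
        if count > best.get(name, 0):
            best[name] = count
    return best

def percentile(sorted_values, p):
    """Nearest-rank percentile of an ascending-sorted list."""
    idx = max(0, round(p / 100 * len(sorted_values)) - 1)
    return sorted_values[idx]

def select(best, min_subscribers):
    """Names surviving the subscriber cut (what sub_selects.jsonl holds)."""
    return {name for name, count in best.items() if count >= min_subscribers}

rows = [
    {"subreddit": "AskReddit", "subscribers": 100},
    {"subreddit": "askreddit", "subscribers": 90},   # decrease -> ignored
    {"subreddit": "AskReddit", "subscribers": 150},
    {"subreddit": "tinysub", "subscribers": 3},
]
best = combine_counts(rows)
selected = select(best, min_subscribers=50)
```

Taking the maximum ever seen, rather than the latest value, is what makes the index robust to the dumps occasionally reporting stale or lower counts.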

# Rough Filtering (RedditScoring.py)

Even after filtering by subscriber count, we are still left with subreddits that are mostly empty or full of spam.

7. `RedditScoring.py compute-scores <M700 Folder>` computes some statistics for both submissions and comments:
 - QScores such as Engagement, Richness and Diversity
 - Unique Submission/Comment Authors and submission/comment counts
8. Run `RedditScoring.py merge-stats <M700 Folder> <Merged Stats Jsonl>`
 - Merges the jsonl stats files into one bigger consolidated stats file.
9. `RedditScoring.py makefilter StatsM700.jsonl <Selection-OUTPUT> --mode <text/media>`
 - Uses the consolidated stats to write the final subreddit selection file (`<Selection-OUTPUT>`), with `--mode` choosing between text- and media-oriented filtering.
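
The statistics from step 7 can be pictured with stand-in formulas (these are guesses at the flavor of the QScores, not the actual definitions in `RedditScoring.py`):

```python
def rough_stats(submissions, comments):
    """Per-subreddit activity statistics (illustrative definitions only)."""
    sub_authors = {s["author"] for s in submissions}
    com_authors = {c["author"] for c in comments}
    items = len(submissions) + len(comments)
    return {
        "engagement": len(comments) / max(1, len(submissions)),  # comments per post
        "richness": sum(len(c.get("body", "")) for c in comments)
                    / max(1, len(comments)),                     # avg comment length
        "diversity": len(sub_authors | com_authors) / max(1, items),
        "unique_submission_authors": len(sub_authors),
        "unique_comment_authors": len(com_authors),
        "submissions": len(submissions),
        "comments": len(comments),
    }

stats = rough_stats(
    submissions=[{"author": "alice"}],
    comments=[{"author": "bob", "body": "hi"},
              {"author": "alice", "body": "yo!"}],
)
```

Whatever the exact formulas, the point is the same: near-empty or single-poster subreddits score poorly on all of these axes and get filtered out.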

# Rewrite and Merge Submissions and Comments into Conversations (RedditThreader.py)

10. Run `RedditThreader.py folder subreddits_M700 <OUTPUT_FOLDER> <Selection-OUTPUT>`
 - Rethreads submissions and comments into threads
 - Thread-level filtering and a bit of additional subreddit filtering is applied.
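
Rethreading hinges on Reddit's `parent_id` convention: a `t3_` prefix points at a submission, `t1_` at another comment. A minimal sketch of rebuilding thread trees from flat dumps (`build_threads` is a hypothetical helper; orphaned comments whose parents are missing from the dump are simply dropped here):

```python
def build_threads(submissions, comments):
    """Nest flat comment records under their submissions as reply trees."""
    threads = {s["id"]: dict(s, replies=[]) for s in submissions}
    nodes = {c["id"]: dict(c, replies=[]) for c in comments}
    for node in nodes.values():
        kind, _, parent = node["parent_id"].partition("_")
        if kind == "t1" and parent in nodes:
            nodes[parent]["replies"].append(node)    # reply to a comment
        elif kind == "t3" and parent in threads:
            threads[parent]["replies"].append(node)  # top-level comment
        # else: orphan -- its parent was filtered out or missing
    return list(threads.values())

threads = build_threads(
    submissions=[{"id": "abc", "title": "Hello"}],
    comments=[
        {"id": "c1", "parent_id": "t3_abc", "body": "top-level"},
        {"id": "c2", "parent_id": "t1_c1", "body": "a reply"},
    ],
)
```

Because both passes share the same node dicts, attaching a child mutates the tree in place and a single linear scan over the comments is enough.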

# Uploading to HF

11. Run `RedditChunking.py`
 - A very quick script that chunks all subreddits into jsonl chunks of 10 GiB each for upload.
12. Upload Chunks
 - Upload the chunked files to Hugging Face.
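
The chunking in step 11 is just size-bounded grouping on line boundaries. A sketch with a toy size limit (10 GiB in practice; `chunk_jsonl` is illustrative, not the actual script):

```python
def chunk_jsonl(lines, max_bytes):
    """Group jsonl lines into chunks of at most max_bytes each,
    never splitting a line across chunks."""
    chunk, size = [], 0
    for line in lines:
        n = len(line.encode("utf-8"))
        if chunk and size + n > max_bytes:
            yield chunk
            chunk, size = [], 0
        chunk.append(line)
        size += n
    if chunk:
        yield chunk

# Five 9-byte lines with a 25-byte budget -> chunks of 2, 2 and 1 lines.
lines = ['{"n": %d}\n' % i for i in range(5)]
chunks = list(chunk_jsonl(lines, max_bytes=25))
```

Splitting only at line boundaries keeps every chunk a valid jsonl file on its own, which is what makes the chunks independently loadable after upload.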