---
license: mit
multilinguality:
  - multilingual
source_datasets:
  - original
task_categories:
  - text-classification
  - token-classification
  - question-answering
  - summarization
  - text-generation
task_ids:
  - sentiment-analysis
  - topic-classification
  - named-entity-recognition
  - language-modeling
  - text-scoring
  - multi-class-classification
  - multi-label-classification
  - extractive-qa
  - news-articles-summarization
---

Bittensor Subnet 13 Reddit Dataset

Data-universe: The finest collection of social media data the web has to offer

Dataset Description

  • Repository: tensorshield/reddit_dataset_85
  • Subnet: Bittensor Subnet 13
  • Miner Hotkey: 5FvjTc3UHpdft9Jusu1hbj87czkqHXTrkLvkBakcWLjZKv1X

Dataset Summary

This dataset is part of the Bittensor Subnet 13 decentralized network, containing preprocessed Reddit data. The data is continuously updated by network miners, providing a real-time stream of Reddit content for various analytical and machine learning tasks. For more information about the dataset, please visit the official repository.

Supported Tasks

The versatility of this dataset allows researchers and data scientists to explore various aspects of social media dynamics and develop innovative applications. Users are encouraged to leverage this data creatively for their specific research or business needs. For example:

  • Sentiment Analysis
  • Topic Modeling
  • Community Analysis
  • Content Categorization

Languages

Primary language: English. Because the data is contributed by a decentralized network of miners, content in other languages may also appear.

Dataset Structure

Data Instances

Each instance represents a single Reddit post or comment with the following fields:

Data Fields

  • text (string): The main content of the Reddit post or comment.
  • label (string): Sentiment or topic category of the content.
  • dataType (string): Indicates whether the entry is a post or a comment.
  • communityName (string): The name of the subreddit where the content was posted.
  • datetime (string): The timestamp when the post or comment was created.
  • username_encoded (string): An encoded version of the username to maintain user privacy.
  • url_encoded (string): An encoded version of any URLs included in the content.
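The fields above can be sketched as a single instance. This is a minimal illustration: the field names follow the card, but the values shown are hypothetical, not drawn from the dataset.

```python
# Field names documented in the dataset card.
EXPECTED_FIELDS = {
    "text", "label", "dataType", "communityName",
    "datetime", "username_encoded", "url_encoded",
}

# A hypothetical instance; all values are illustrative placeholders.
sample = {
    "text": "Great game last night!",
    "label": "sports",
    "dataType": "comment",
    "communityName": "r/CollegeBasketball",
    "datetime": "2025-03-24T00:00:00Z",
    "username_encoded": "dXNlcl8xMjM=",  # placeholder encoded value
    "url_encoded": "",                   # empty when no URL is present
}

def validate_instance(instance: dict) -> bool:
    """Check that an instance has exactly the documented fields, all strings."""
    return (set(instance) == EXPECTED_FIELDS
            and all(isinstance(v, str) for v in instance.values()))
```

A check like `validate_instance` can be useful when streaming continuously updated data, since it catches rows that do not match the documented schema.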

Data Splits

This dataset is continuously updated and does not have fixed splits. Users should create their own splits based on their requirements and the data's timestamp.
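One common approach is a time-based split on the `datetime` field. The sketch below assumes ISO-8601 timestamps with a trailing `Z` (the format shown in this card); the `time_split` helper and the sample records are hypothetical.

```python
from datetime import datetime

def parse_ts(ts: str) -> datetime:
    # The card shows ISO-8601 timestamps with a trailing 'Z' (UTC).
    return datetime.fromisoformat(ts.replace("Z", "+00:00"))

def time_split(records, cutoff: str):
    """Split records into train/test around a cutoff timestamp."""
    cut = parse_ts(cutoff)
    train = [r for r in records if parse_ts(r["datetime"]) < cut]
    test = [r for r in records if parse_ts(r["datetime"]) >= cut]
    return train, test

# Illustrative records, not real dataset rows:
records = [
    {"datetime": "2025-03-24T00:00:00Z"},
    {"datetime": "2025-03-30T12:00:00Z"},
    {"datetime": "2025-03-31T01:48:18Z"},
]
train, test = time_split(records, "2025-03-31T00:00:00Z")
```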

Dataset Creation

Source Data

Data is collected from public posts and comments on Reddit, adhering to the platform's terms of service and API usage guidelines.

Personal and Sensitive Information

All usernames and URLs are encoded to protect user privacy. The dataset does not intentionally include personal or sensitive information.
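The card does not document the encoding scheme the miners use. As one plausible illustration only, an irreversible encoding could be a salted hash, which keeps rows by the same user joinable without exposing the raw username:

```python
import hashlib

def encode_field(value: str, salt: str = "example-salt") -> str:
    """One possible privacy encoding: a salted SHA-256 digest.

    NOTE: this is an assumption for illustration; the actual scheme
    used to produce username_encoded / url_encoded is not specified
    in the dataset card.
    """
    return hashlib.sha256((salt + value).encode("utf-8")).hexdigest()
```

Because the same input always yields the same digest, per-user aggregation remains possible, while the original value cannot be recovered from the token.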

Considerations for Using the Data

Social Impact and Biases

Users should be aware of potential biases inherent in Reddit data, including demographic and content biases. This dataset reflects the content and opinions expressed on Reddit and should not be considered a representative sample of the general population.

Limitations

  • Data quality may vary due to the decentralized nature of the collection process.
  • The dataset may contain noise, spam, or irrelevant content typical of social media platforms.
  • Temporal biases may exist due to real-time collection methods.
  • The dataset is limited to public subreddits and does not include private or restricted communities.

Additional Information

Licensing Information

The dataset is released under the MIT license. Use of this dataset is also subject to the Reddit Terms of Use.

Citation Information

If you use this dataset in your research, please cite it as follows:

@misc{tensorshield2025datauniversereddit_dataset_85,
  title={The Data Universe Datasets: The finest collection of social media data the web has to offer},
  author={tensorshield},
  year={2025},
  url={https://huggingface.co/datasets/tensorshield/reddit_dataset_85}
}

Contributions

To report issues or contribute to the dataset, please contact the miner or use the Bittensor Subnet 13 governance mechanisms.

Dataset Statistics

[This section is automatically updated]

  • Total Instances: 150311
  • Date Range: 2025-03-24T00:00:00Z to 2025-03-24T00:00:00Z
  • Last Updated: 2025-03-31T01:48:18Z

Data Distribution

  • Posts: 12.10%
  • Comments: 87.90%
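A distribution like the one above can be recomputed from the `dataType` field. The sketch below uses a tiny invented sample chosen to mirror the rough post/comment ratio reported here; it is not real dataset data.

```python
from collections import Counter

def type_distribution(records):
    """Percentage of posts vs. comments, based on the `dataType` field."""
    counts = Counter(r["dataType"] for r in records)
    total = sum(counts.values())
    return {k: round(100 * v / total, 2) for k, v in counts.items()}

# Tiny illustrative sample (the real split in this card is ~12% / ~88%):
records = [{"dataType": "post"}] * 3 + [{"dataType": "comment"}] * 22
dist = type_distribution(records)
```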

Top 10 Subreddits

For full statistics, please refer to the stats.json file in the repository.

Rank Topic Total Count Percentage
1 r/CollegeBasketball 5795 3.86%
2 r/AskReddit 3805 2.53%
3 r/AITAH 1907 1.27%
4 r/mildlyinfuriating 1196 0.80%
5 r/90DayFiance 836 0.56%
6 r/denvernuggets 805 0.54%
7 r/tennis 714 0.48%
8 r/politics 710 0.47%
9 r/Advice 687 0.46%
10 r/moviecritic 686 0.46%

Update History

Date New Instances Total Instances
2025-03-31T00:13:38Z 1475 1475
2025-03-31T00:14:35Z 1586 3061
2025-03-31T00:15:21Z 1569 4630
2025-03-31T00:16:19Z 1366 5996
2025-03-31T00:17:15Z 1556 7552
2025-03-31T00:18:33Z 1824 9376
2025-03-31T00:19:21Z 1186 10562
2025-03-31T00:20:19Z 1644 12206
2025-03-31T00:21:24Z 1542 13748
2025-03-31T00:22:14Z 1409 15157
2025-03-31T00:23:19Z 1564 16721
2025-03-31T00:24:14Z 1425 18146
2025-03-31T00:25:32Z 1527 19673
2025-03-31T00:26:28Z 1594 21267
2025-03-31T00:27:19Z 1442 22709
2025-03-31T00:28:22Z 1403 24112
2025-03-31T00:29:30Z 1650 25762
2025-03-31T00:30:22Z 1345 27107
2025-03-31T00:31:16Z 1507 28614
2025-03-31T00:32:16Z 1399 30013
2025-03-31T00:33:21Z 1583 31596
2025-03-31T00:34:21Z 1511 33107
2025-03-31T00:35:19Z 1430 34537
2025-03-31T00:36:12Z 1446 35983
2025-03-31T00:37:16Z 1632 37615
2025-03-31T00:38:16Z 1499 39114
2025-03-31T00:39:15Z 1474 40588
2025-03-31T00:40:14Z 1502 42090
2025-03-31T00:41:13Z 1475 43565
2025-03-31T00:42:14Z 1546 45111
2025-03-31T00:43:13Z 1578 46689
2025-03-31T00:44:14Z 1529 48218
2025-03-31T01:04:21Z 32469 80687
2025-03-31T01:05:13Z 1420 82107
2025-03-31T01:06:14Z 1570 83677
2025-03-31T01:07:13Z 1559 85236
2025-03-31T01:08:16Z 1567 86803
2025-03-31T01:09:13Z 1450 88253
2025-03-31T01:10:15Z 1606 89859
2025-03-31T01:11:15Z 1584 91443
2025-03-31T01:12:15Z 1488 92931
2025-03-31T01:13:15Z 1702 94633
2025-03-31T01:14:50Z 4173 98806
2025-03-31T01:16:52Z 3554 102360
2025-03-31T01:17:43Z 1393 103753
2025-03-31T01:18:22Z 1018 104771
2025-03-31T01:19:14Z 1623 106394
2025-03-31T01:20:16Z 1759 108153
2025-03-31T01:21:16Z 1562 109715
2025-03-31T01:22:13Z 1418 111133
2025-03-31T01:24:24Z 3231 114364
2025-03-31T01:25:21Z 1440 115804
2025-03-31T01:26:15Z 1306 117110
2025-03-31T01:27:12Z 1449 118559
2025-03-31T01:28:12Z 1450 120009
2025-03-31T01:29:11Z 1475 121484
2025-03-31T01:30:11Z 1537 123021
2025-03-31T01:31:12Z 1530 124551
2025-03-31T01:32:12Z 1497 126048
2025-03-31T01:33:11Z 1487 127535
2025-03-31T01:34:13Z 1445 128980
2025-03-31T01:35:12Z 1492 130472
2025-03-31T01:36:25Z 1834 132306
2025-03-31T01:37:14Z 1204 133510
2025-03-31T01:38:16Z 1563 135073
2025-03-31T01:39:14Z 1413 136486
2025-03-31T01:40:27Z 1835 138321
2025-03-31T01:41:14Z 1240 139561
2025-03-31T01:42:19Z 1535 141096
2025-03-31T01:44:01Z 2641 143737
2025-03-31T01:44:35Z 859 144596
2025-03-31T01:45:15Z 977 145573
2025-03-31T01:46:30Z 1728 147301
2025-03-31T01:47:15Z 1397 148698
2025-03-31T01:48:18Z 1613 150311