CC-Meta25-1M
Overview
The CC-Meta25-1M dataset comprises 1,000,341 tuples of web metadata (URL, title, snippet, language) drawn by uniform random sampling from the Common Crawl February 2025 crawl (CC-MAIN-2025-08). Released as version 1.0.0, it captures a representative snapshot of web content from early 2025, retaining original snippet text, including boilerplate such as cookie notices and navigation elements, for authenticity. The dataset is intended for researchers and practitioners exploring web metadata, linguistic distributions, or temporal web trends.
Dataset Specifications
- Source: Common Crawl CC-MAIN-2025-08
- Size: 1,000,341 records, 323.3 MB (Parquet format)
- Sampling: Uniform random selection from the February 2025 crawl
- Features:
  - url: Web page URL (string; "N/A" if invalid or missing)
  - title: Page title (string; "N/A" if missing)
  - snippet: Text excerpt from the page (string; includes boilerplate; "N/A" if missing)
  - language: ISO 639-1 language code (string; e.g., "en"; "N/A" if missing)
- Processing:
- Missing values in all fields replaced with "N/A".
- Duplicates preserved to mirror the raw crawl (0.01% duplicated).
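Because missing values are encoded as the literal string "N/A" rather than nulls, downstream code may want to map that sentinel back to `None`. A minimal sketch, using hypothetical records that mirror the dataset's schema:

```python
# Hypothetical records in the dataset's schema; "N/A" marks missing fields.
records = [
    {"url": "https://example.com", "title": "Example", "snippet": "Hello", "language": "en"},
    {"url": "N/A", "title": "N/A", "snippet": "Accept all cookies ...", "language": "fr"},
]

def clean(record):
    """Replace the "N/A" sentinel with None so missing fields are explicit."""
    return {key: (None if value == "N/A" else value) for key, value in record.items()}

cleaned = [clean(r) for r in records]
```

The same `clean` function can be applied across the full dataset with `datasets.Dataset.map`.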
Accessing the Dataset
Hosted on Hugging Face, the dataset can be accessed as follows:
from datasets import load_dataset

# load_dataset returns a DatasetDict keyed by split; index into the "train" split.
dataset = load_dataset("tshasan/cc-meta25-1m")
print(dataset["train"][0])
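Once loaded, the language field can be used to examine the crawl's linguistic distribution. The sketch below runs over hypothetical sample rows; a real analysis would iterate `dataset["train"]` instead, and skips the "N/A" sentinel used for missing values:

```python
from collections import Counter

# Hypothetical sample rows; in practice, iterate dataset["train"] from load_dataset.
sample = [
    {"language": "en"},
    {"language": "en"},
    {"language": "de"},
    {"language": "N/A"},
]

# Count ISO 639-1 codes, excluding the "N/A" sentinel for missing values.
distribution = Counter(r["language"] for r in sample if r["language"] != "N/A")
# → Counter({'en': 2, 'de': 1})
```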