---
dataset_info:
  - config_name: default
    features:
      - name: utterance
        dtype: string
      - name: label
        dtype: int64
    splits:
      - name: train
        num_bytes: 923371
        num_examples: 11492
      - name: validation
        num_bytes: 162616
        num_examples: 2031
      - name: test
        num_bytes: 235839
        num_examples: 2968
    download_size: 564588
    dataset_size: 1321826
  - config_name: intents
    features:
      - name: id
        dtype: int64
      - name: name
        dtype: string
      - name: tags
        sequence: 'null'
      - name: regexp_full_match
        sequence: 'null'
      - name: regexp_partial_match
        sequence: 'null'
      - name: description
        dtype: 'null'
    splits:
      - name: intents
        num_bytes: 2187
        num_examples: 58
    download_size: 3921
    dataset_size: 2187
configs:
  - config_name: default
    data_files:
      - split: train
        path: data/train-*
      - split: validation
        path: data/validation-*
      - split: test
        path: data/test-*
  - config_name: intents
    data_files:
      - split: intents
        path: intents/intents-*
task_categories:
  - text-classification
language:
  - ru
---

# Russian MASSIVE

This is a text classification dataset. It is intended for machine learning research and experimentation.

This dataset is obtained by formatting another publicly available dataset to be compatible with our AutoIntent Library.

## Usage

It is intended to be used with our AutoIntent Library:

```python
from autointent import Dataset

massive_ru = Dataset.from_datasets("AutoIntent/massive_ru")
```
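
If you prefer to work with the raw splits, both configs can also be loaded with the plain Hugging Face `datasets` library: the `default` config holds the labelled utterances, and the `intents` config holds the class metadata.

```python
from datasets import load_dataset

# Labelled utterances: train / validation / test splits
splits = load_dataset("AutoIntent/massive_ru")
print(splits["train"][0])  # {'utterance': '...', 'label': ...}

# Class metadata: 58 intents with integer ids and names
intents = load_dataset("AutoIntent/massive_ru", "intents")
print(intents["intents"][0])  # {'id': 0, 'name': '...', ...}
```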

## Source

This dataset is taken from [`mteb/amazon_massive_intent`](https://huggingface.co/datasets/mteb/amazon_massive_intent) and formatted with our AutoIntent Library:

```python
from datasets import Dataset as HFDataset
from datasets import load_dataset

from autointent import Dataset
from autointent.schemas import Intent, Sample


def extract_intents_info(split: HFDataset) -> tuple[list[Intent], dict[str, int]]:
    """Extract intent metadata: Intent records and a name-to-id mapping."""
    intent_names = sorted(split.unique("label"))
    # These two intents are excluded from the final dataset
    intent_names.remove("cooking_query")
    intent_names.remove("audio_volume_other")
    n_classes = len(intent_names)
    name_to_id = dict(zip(intent_names, range(n_classes), strict=True))
    intents_data = [Intent(id=i, name=intent_names[i]) for i in range(n_classes)]
    return intents_data, name_to_id


def convert_massive(split: HFDataset, name_to_id: dict[str, int]) -> list[Sample]:
    """Extract utterances and integer labels, skipping the removed intents."""
    return [Sample(utterance=s["text"], label=name_to_id[s["label"]]) for s in split if s["label"] in name_to_id]


if __name__ == "__main__":
    massive = load_dataset("mteb/amazon_massive_intent", "ru")
    intents, name_to_id = extract_intents_info(massive["train"])
    train_samples = convert_massive(massive["train"], name_to_id)
    test_samples = convert_massive(massive["test"], name_to_id)
    validation_samples = convert_massive(massive["validation"], name_to_id)
    dataset = Dataset.from_dict(
        {"intents": intents, "train": train_samples, "test": test_samples, "validation": validation_samples}
    )
```
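
Note that the script above only builds the dataset object in memory. A minimal sketch of the publishing step, assuming the AutoIntent `Dataset` exposes a `push_to_hub` method (check the library documentation for the exact API):

```python
# Hypothetical publishing step: `push_to_hub` is an assumption here;
# verify it against the AutoIntent version you have installed.
dataset.push_to_hub("AutoIntent/massive_ru")
```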