---
dataset_info:
  features:
  - name: messages
    list:
    - name: content
      dtype: string
    - name: role
      dtype: string
  - name: category
    dtype: string
  - name: prompt_id
    dtype: string
  splits:
  - name: train
    num_bytes: 222353
    num_examples: 400
  download_size: 139530
  dataset_size: 222353
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
---
# Dataset Card for "no_robots_test400"

[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)

This is a subset of [HuggingFaceH4/no_robots](https://huggingface.co/datasets/HuggingFaceH4/no_robots), selecting 400 user prompts from the `test_sft` split. Each example contains the conversation history up to and including a user turn, together with the original `category` and `prompt_id`. Per-category counts:

| category | messages |
|:-----------|-----------:|
| Brainstorm | 36 |
| Chat | 101 |
| Classify | 16 |
| Closed QA | 15 |
| Coding | 16 |
| Extract | 7 |
| Generation | 129 |
| Open QA | 34 |
| Rewrite | 21 |
| Summarize | 25 |
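
The subset can be loaded from the Hub like any other dataset. A minimal sketch; the split and feature names come from the YAML header above:

```python
from datasets import load_dataset

# 'train' is the only split, per the YAML header above.
ds = load_dataset('yujiepan/no_robots_test400', split='train')

print(ds)                 # features: messages, category, prompt_id; 400 rows
print(ds[0]['messages'])  # a conversation ending with a user turn
```
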
Code used to create this subset:
```python
import numpy as np
import pandas as pd
from copy import deepcopy
from datasets import Dataset, load_dataset


def get_norobot_dataset():
    """Expand each conversation into one example per user turn."""
    ds = load_dataset('HuggingFaceH4/no_robots')
    all_test_data = []
    for sample in ds['test_sft']:
        for i, message in enumerate(sample['messages']):
            if message['role'] == 'user':
                # Keep the history up to and including this user message.
                item = dict(
                    messages=deepcopy(sample['messages'][:i + 1]),
                    category=sample['category'],
                    prompt_id=sample['prompt_id'],
                )
                all_test_data.append(item)
    return Dataset.from_list(all_test_data)


dataset = get_norobot_dataset().to_pandas()
print(dataset.groupby('category').count())  # per-category counts before sampling

# Sort by a stringified key so the row order (and thus the seeded
# sampling below) is deterministic across runs.
dataset['_sort_key'] = dataset['messages'].map(str)
dataset = dataset.sort_values(['_sort_key'])

subset = []
for category, group_df in sorted(dataset.groupby('category')):
    # Take ~60.3% of each category; if that would be 20 or fewer
    # examples, keep the whole category instead.
    n = int(len(group_df) * 0.603)
    if n <= 20:
        n = len(group_df)
    # A fresh generator with the same seed for every category keeps
    # the selection reproducible.
    indices = np.random.default_rng(seed=42).choice(len(group_df), size=n, replace=False)
    subset.append(group_df.iloc[indices])

df = pd.concat(subset)
df = df.drop(columns=['_sort_key'])
df = df.reset_index(drop=True)

print(len(df))  # 400
print(df.groupby('category').count().to_string())

Dataset.from_pandas(df).push_to_hub('yujiepan/no_robots_test400')
```
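
Two details keep the selection reproducible: rows are sorted by a stringified `_sort_key` before grouping, and the RNG is re-created with `seed=42` inside the loop so every category is drawn with the same fixed seed. As a sanity check, the published subset can be compared against the table above (a minimal sketch; assumes Hub access):

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset('yujiepan/no_robots_test400', split='train')
assert len(ds) == 400

# Per-category counts; these should match the table above.
for category, count in sorted(Counter(ds['category']).items()):
    print(f'{category:12s} {count}')
```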