---
license: other
language:
- en
---
# Baseline set sourced from: https://huggingface.co/datasets/nbeerbower/reddit-dpo

# (Reddit - Comment - Style - Instruct)-v1
## Converted to ShareGPT. Removed errors, rejections, and over-prevalent n-grams. Deduplicated (string match / MinHash). Deslopped (classifier model / string match). Spelling and grammar corrected.

## Dataset tool used: [ShareGPT-Formaxxing](https://github.com/The-Chaotic-Neutrals/ShareGPT-Formaxxing)
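
For illustration only, a minimal sketch of the near-duplicate removal step described above, using MinHash LSH over comment text. It assumes the `datasketch` library and a plain list of strings; the actual processing was done with the ShareGPT-Formaxxing tool linked above, so thresholds and tokenization will differ:

```python
# Illustrative only: approximate near-duplicate removal with MinHash LSH.
# Assumes `datasketch` (pip install datasketch); the real pipeline used the
# ShareGPT-Formaxxing tool, whose exact parameters are not documented here.
from datasketch import MinHash, MinHashLSH

def minhash_of(text, num_perm=128):
    m = MinHash(num_perm=num_perm)
    for token in text.lower().split():
        m.update(token.encode("utf-8"))
    return m

def dedupe(texts, threshold=0.8, num_perm=128):
    lsh = MinHashLSH(threshold=threshold, num_perm=num_perm)
    kept = []
    for i, text in enumerate(texts):
        m = minhash_of(text, num_perm)
        if lsh.query(m):          # a near-duplicate was already kept
            continue
        lsh.insert(str(i), m)
        kept.append(text)
    return kept

# Exact and near-duplicates collapse to a single entry.
print(dedupe(["the quick brown fox", "the quick brown fox", "hello world"]))
```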

### To do: filter / remove refusals, if any.
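
A minimal sketch of loading the JSON file with the `datasets` library and peeking at a ShareGPT-style conversation. The file name and the `conversations` field layout are assumptions based on the usual ShareGPT format, not guaranteed by this card:

```python
# Minimal sketch: load the ShareGPT-formatted JSON with the `datasets` library.
# "data.json" is a placeholder; point it at the actual file in this repo.
from datasets import load_dataset

ds = load_dataset("json", data_files="data.json", split="train")

# ShareGPT-style records typically hold a list of {"from": ..., "value": ...} turns.
sample = ds[0]
for turn in sample.get("conversations", []):
    print(f'{turn["from"]}: {turn["value"][:80]}')
```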