Herrsimian


Herrsimian is a tiny (121 samples), long-context (up to about 52k tokens) NSFW conversational dataset containing mainly data from a certain expert roleplayer (let's call him Simian) who actively participated on a few different forums until the end of 2022. It was used to train Llama-3.1-Herrsimian-8B, although results there weren't great: the learning rate was probably too high, and conversations alone don't make for a good RP model.

The roleplays are mostly in book/novel style, with narration in past tense and third person perspective, as well as quote mark-delimited dialogue lines. Markdown-style roleplay has not been included due to lack of data.

☢️ Warning: the dataset is almost entirely composed of highly questionable content.

Updates

  • 2025-03-10 - Significant revision
    • Removed all non-Simian RP (celebrity interviews and roleplays where Simian didn't participate) and modified the dataset so that all assistant responses are from Simian, which should make it more effective to train models on that writing style with masking. I will separately upload the previously included samples at a later time.
    • Added a few more Simian-related RPs that I previously processed for ShoriRP but forgot to add to Herrsimian.
    • Changed a few "conversation generation" samples into regular conversations.
    • Total sample number decreased to 121 samples + 2 discarded samples used as eval.
  • 2024-09-06 - Added sample title/personal notes in the dataset. Should help with filtering and sorting.
  • 2024-09-05 - Last samples and evals added, 131 samples in total
  • 2024-09-04 - Added a few more samples, 100 in total now
  • 2024-09-03 - First version uploaded on HF

General overview of the dataset

Compatibility

Note that the dataset is not in a standard ShareGPT format. It has a separate name field for character names, and user/assistant turns do not alternate like you would normally expect. Further processing with Python code will be needed to adapt the dataset for your purposes.
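As a starting point for such processing, here is a minimal sketch that folds the separate name field into the message text so the conversation fits pipelines expecting plain role/content pairs. The field names (role, name, content) follow the dataset card; the "Name: message" rendering is just one possible convention, not part of the dataset itself.

```python
def flatten_names(conversation):
    """Merge each message's character name into its content,
    producing plain {"role", "content"} messages."""
    flattened = []
    for msg in conversation:
        name = msg.get("name")
        content = msg["content"]
        if name:
            content = f"{name}: {content}"
        flattened.append({"role": msg["role"], "content": content})
    return flattened

# Hypothetical sample in the card's described structure:
sample = [
    {"role": "system", "content": "Scenario: ..."},
    {"role": "user", "name": "Alice", "content": "\"Hello.\""},
    {"role": "assistant", "name": "Bob", "content": "Bob nodded."},
]
print(flatten_names(sample)[1]["content"])  # Alice: "Hello."
```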

Composition

Each sample begins with a backtranslated instruction defining the scenario, backstory (if applicable), characters and task, followed by a manually curated, fully segmented conversation in which user and assistant take turns for their own characters, narration, or OOC. Usernames have been removed; only character names remain.

Design quirks

An intentional design quirk of this dataset is that the conversations are multicharacter: either the user or the model may play more than one character, and user/model turns do not necessarily alternate, unlike in most other datasets (and as required by many training pipelines). This can make the dataset incompatible with certain pipelines (e.g. where user/assistant turns must alternate for masking to work properly); additional processing will be needed to correct this.
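One simple correction, sketched below under the assumption of {"role", "content"} messages, is to merge consecutive messages with the same role so that turns strictly alternate; the "\n\n" joiner is an arbitrary choice.

```python
def merge_consecutive(conversation):
    """Merge runs of consecutive messages sharing a role so that
    user and assistant turns strictly alternate."""
    merged = []
    for msg in conversation:
        if merged and merged[-1]["role"] == msg["role"]:
            # Same role as the previous message: append to it.
            merged[-1]["content"] += "\n\n" + msg["content"]
        else:
            merged.append(dict(msg))
    return merged
```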

A second notable intentional design quirk is that message length is highly variable, ranging from a few tokens to several hundred, with an estimated average of around 150 tokens. The idea is that the model should learn when to naturally use short or long messages, rather than focusing on one specific length. In any case, dataset samples never contain long stretches of very short messages.

A third difference from most datasets is that two or more characters may often speak or act simultaneously. This is rendered by joining the character names with ampersands in the form Char1 & Char2 & CharN, similarly to the scripts of some Japanese visual novels.

Additionally, characters may occasionally change name; this usually happens when their name gets revealed in the story. In that case, for one message the character name is transitionally rendered in the form Oldname (Newname), with subsequent messages continuing with Newname.
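If a pipeline needs to track character identity across such transitions, the transitional form can be detected with a regular expression. This is a sketch assuming names without nested parentheses; the function name is hypothetical.

```python
import re

# Matches the transitional "Oldname (Newname)" form described above.
TRANSITION = re.compile(r"^(?P<old>[^()]+?)\s*\((?P<new>[^()]+)\)$")

def name_transition(name_field):
    """Return (old_name, new_name) if the field is a name
    transition, otherwise None."""
    m = TRANSITION.match(name_field)
    return (m.group("old"), m.group("new")) if m else None
```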

The initial backtranslated instruction intentionally doesn't follow a fixed format, but it usually includes at least the character descriptions, a title, and the scenario (a summary of the events that will happen in the roleplay).

Dataset fields

Metadata

  • label — A short name that may help with sorting/processing the various roleplaying thread segments.
  • title — Either the name of the roleplaying thread, or a name that I gave it.
  • simian — True or False; indicates whether Simian participates in the thread. In the current data version it's always True.
  • quality — A subjective general thread/writing quality indicator. Can be "low", "mid" or "good".
  • date-start — The date of the opening post in the thread segment/conversation. Simian's writing quality improved over time and wasn't too good before 2015.
  • notes — Miscellaneous notes that I might have added for various reasons.
  • changes — In some roleplays I changed names to mitigate memorization of very repetitive data. When present, it's a dictionary of "original name": "new name" pairs.
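For analyses that need the original names, the changes dictionary can be inverted and applied back to the text. A plain str.replace is a rough sketch; word-boundary matching may be needed for names that are substrings of other words.

```python
def restore_original_names(text, changes):
    """Undo the "original name" -> "new name" substitutions
    recorded in a sample's changes dictionary."""
    for original, new in changes.items():
        text = text.replace(new, original)
    return text
```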

Conversation

  • role — Simian's messages always have the assistant role; user is for the other participant. The first role in every roleplay is system.
  • name — The name of the character acting or speaking. When absent, it can be assumed that the assistant and the user are talking to each other. A limited amount of effort was put into randomizing names that were used too frequently, although more work needs to be done in this regard. OOC messages have been given either the user or assistant role depending on context, but never a name.
  • content — The message or utterance. For roleplay, it's generally in typical book/forum style, with narration in third person and past tense, and dialogue lines delimited by ASCII quote marks.

Finetuning suggestions

Given the tiny number of samples, ordinary finetuning strategies intended for large amounts of data won't work well. The dataset was primarily intended to give the model one voice via deliberate overfitting. For Llama-3.1-Herrsimian-8B I used 5 training epochs with LoRA finetuning.

Nowadays I would recommend using a very low learning rate and no fewer than 10-15 epochs, so that overfitting occurs without significant forgetting of the model's capabilities.

To limit the horniness of the trained model it might be beneficial to clip the conversations to whatever fits the training context size and not reuse the remainder, since NSFW scenes usually do not begin right away in the roleplays.
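Such clipping can be sketched as below. The len(text) // 4 heuristic is a rough token estimate and the budget value is arbitrary; substitute the actual tokenizer and context size of the model being trained.

```python
def clip_to_budget(conversation, max_tokens=8192):
    """Keep messages from the start of the conversation until an
    estimated token budget is reached, discarding the remainder
    rather than reusing it as extra samples."""
    clipped, used = [], 0
    for msg in conversation:
        cost = len(msg["content"]) // 4  # rough token estimate
        if clipped and used + cost > max_tokens:
            break
        clipped.append(msg)
        used += cost
    return clipped
```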

Most samples come from a few very long roleplays; limiting how many segments of each are used may help avoid overrepresenting the same content (which might promote hallucinations).

Dataset statistics

Summary

  • 123 examples (121 train + 2 eval)
  • Total messages: 17,996
  • Total message bytes: 9,993,298 (message content only)

Message length distribution

Length statistics (old statistics in Llama-3 tokens; to be recomputed at some point, but they should still be representative of the dataset)
