rchan26 committed
Commit 07674f1 · verified · 1 Parent(s): 73e4147

add info in readme

Files changed (1): README.md +5 -1
README.md CHANGED
@@ -37,4 +37,8 @@ dataset_info:
 ---
 # Dataset Card for "english_char_split"
 
-[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
+This is a dataset of English words that have been tokenised by character.
+
+It was originally used to train a RoBERTa model from scratch on the masked language modelling task, in which characters were randomly masked during training.
+
+This was ultimately used in an anomaly detection task, where the embeddings from the trained model were used to detect non-English words - see the full example [here](https://github.com/datasig-ac-uk/signature_applications/tree/master/anomaly_detection_language_dataset).