Datasets:
Tasks: Text Classification
Modalities: Text
Formats: parquet
Languages: English
Size: 10K - 100K
License:
add info in readme
README.md CHANGED
@@ -37,4 +37,8 @@ dataset_info:
 ---
 # Dataset Card for "english_char_split"
 
-
+This is a dataset of English words which have been tokenised by character.
+
+It was originally used to train a RoBERTa model from scratch on the masked language modelling task, where characters were randomly masked during training.
+
+This was ultimately used in an anomaly detection task in which the embeddings from the trained model were used to detect non-English words - see the full example [here](https://github.com/datasig-ac-uk/signature_applications/tree/master/anomaly_detection_language_dataset).
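The card above describes the data as English words split into characters. As a minimal sketch (not part of this commit), the dataset could be loaded and inspected with the `datasets` library; the repository id below is a placeholder, since the card only gives the short name "english_char_split", and the column names depend on the actual schema.

```python
from datasets import load_dataset

# Placeholder repo id - substitute the actual Hub path of english_char_split.
dataset = load_dataset("<user>/english_char_split")

# Each record is expected to hold an English word split into characters,
# e.g. "hello" -> "h e l l o" (illustrative only; the real column names and
# representation depend on the dataset schema).
print(dataset)
print(dataset["train"][0])
```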
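The masked language modelling setup mentioned in the card is not part of this commit either; the sketch below shows one conventional way to wire up random masking over character tokens with `transformers`. The tokenizer path, vocabulary size, and masking probability are assumptions, not values taken from the card.

```python
from transformers import (
    DataCollatorForLanguageModeling,
    PreTrainedTokenizerFast,
    RobertaConfig,
    RobertaForMaskedLM,
)

# Assumed character-level tokenizer saved beforehand; not provided by the card.
tokenizer = PreTrainedTokenizerFast.from_pretrained("path/to/char-tokenizer")

# RoBERTa initialised from a config only, i.e. trained from scratch rather than
# from a pretrained checkpoint.
config = RobertaConfig(vocab_size=tokenizer.vocab_size)
model = RobertaForMaskedLM(config)

# Randomly mask tokens (here, characters) during training; 0.15 is the library
# default probability, not a value stated in the card.
collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)
```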