---
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
  - split: validation
    path: data/validation-*
dataset_info:
  features:
  - name: word
    dtype: string
  - name: language
    dtype: string
  - name: input_ids
    sequence: int32
  - name: attention_mask
    sequence: int8
  - name: special_tokens_mask
    sequence: int8
  - name: tokens
    sequence: string
  splits:
  - name: train
    num_bytes: 5310458
    num_examples: 37849
  - name: test
    num_bytes: 1981786
    num_examples: 14123
  - name: validation
    num_bytes: 2614514
    num_examples: 18643
  download_size: 2205128
  dataset_size: 9906758
license: mit
task_categories:
- text-classification
language:
- en
---
# Dataset Card for "english_char_split"

This is a dataset of English words that have been tokenised at the character level.
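
As a quick orientation, the splits and fields declared in the YAML header above can be loaded with the 🤗 `datasets` library. The snippet below is a minimal sketch; the repository id passed to `load_dataset` is an assumption based on this card's title and should be replaced with the dataset's actual Hub id.

```python
from datasets import load_dataset

# Assumed Hub id, taken from the card title; substitute the real
# "<namespace>/english_char_split" id when loading.
dataset = load_dataset("english_char_split")

# Each example carries the raw word plus its character-level encoding,
# matching the features listed in the card header.
example = dataset["train"][0]
print(example["word"])            # the raw English word (string)
print(example["tokens"])          # the word split into character tokens
print(example["input_ids"])       # integer ids, one per character token
print(example["attention_mask"])  # 1 for real positions, 0 for padding
```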

It was originally used to train a RoBERTa model from scratch on a masked language modelling task in which characters were randomly masked during training.
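
For illustration, a training setup of this kind can be assembled with the `transformers` library. This is a hedged sketch, not the original training script: the tokenizer path, model sizes, and training arguments below are all illustrative assumptions.

```python
from transformers import (
    RobertaConfig,
    RobertaForMaskedLM,
    DataCollatorForLanguageModeling,
    PreTrainedTokenizerFast,
    Trainer,
    TrainingArguments,
)

# Hypothetical: a character-level tokenizer saved to disk beforehand.
tokenizer = PreTrainedTokenizerFast.from_pretrained("path/to/char_tokenizer")

# A small RoBERTa configured from scratch; the sizes are illustrative.
config = RobertaConfig(
    vocab_size=tokenizer.vocab_size,
    max_position_embeddings=64,
    hidden_size=128,
    num_hidden_layers=4,
    num_attention_heads=4,
)
model = RobertaForMaskedLM(config)

# Randomly mask character tokens during training, as described above.
data_collator = DataCollatorForLanguageModeling(
    tokenizer=tokenizer, mlm=True, mlm_probability=0.15
)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="out", num_train_epochs=3),
    train_dataset=dataset["train"],
    eval_dataset=dataset["validation"],
    data_collator=data_collator,
)
trainer.train()
```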

The trained model's embeddings were ultimately used in an anomaly detection task to flag non-English words; see the full example [here](https://github.com/datasig-ac-uk/signature_applications/tree/master/anomaly_detection_language_dataset).
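
One common way to turn such a model into per-word embeddings is to mean-pool its final hidden states; the linked repository contains the actual method used, so the snippet below is only a plausible sketch, with the model path as a placeholder assumption.

```python
import torch
from transformers import RobertaModel

# Hypothetical path to the RoBERTa model trained on this dataset.
model = RobertaModel.from_pretrained("path/to/trained_model")
model.eval()

def embed_word(example):
    """Embed one dataset example by mean-pooling the final hidden states."""
    input_ids = torch.tensor([example["input_ids"]])
    attention_mask = torch.tensor([example["attention_mask"]])
    with torch.no_grad():
        hidden = model(
            input_ids=input_ids, attention_mask=attention_mask
        ).last_hidden_state
    # Average over attended (non-padding) positions: one vector per word.
    mask = attention_mask.unsqueeze(-1)
    return (hidden * mask).sum(dim=1) / mask.sum(dim=1)
```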