---

dataset_info:
  features:
  - name: image
    dtype: image
  - name: label
    dtype:
      class_label:
        names:
          '0': Sample001
          '1': Sample002
          '2': Sample003
          '3': Sample004
          '4': Sample005
          '5': Sample006
          '6': Sample007
          '7': Sample008
          '8': Sample009
          '9': Sample010
          '10': Sample011
          '11': Sample012
          '12': Sample013
          '13': Sample014
          '14': Sample015
          '15': Sample016
          '16': Sample017
          '17': Sample018
          '18': Sample019
          '19': Sample020
          '20': Sample021
          '21': Sample022
          '22': Sample023
          '23': Sample024
          '24': Sample025
          '25': Sample026
          '26': Sample027
          '27': Sample028
          '28': Sample029
          '29': Sample030
          '30': Sample031
          '31': Sample032
          '32': Sample033
          '33': Sample034
          '34': Sample035
          '35': Sample036
          '36': Sample037
          '37': Sample038
          '38': Sample039
          '39': Sample040
          '40': Sample041
          '41': Sample042
          '42': Sample043
          '43': Sample044
          '44': Sample045
          '45': Sample046
          '46': Sample047
          '47': Sample048
          '48': Sample049
          '49': Sample050
          '50': Sample051
          '51': Sample052
          '52': Sample053
          '53': Sample054
          '54': Sample055
          '55': Sample056
          '56': Sample057
          '57': Sample058
          '58': Sample059
          '59': Sample060
          '60': Sample061
          '61': Sample062
  splits:
  - name: train
    num_bytes: 73956435.184
    num_examples: 6136
  - name: validation
    num_bytes: 18902272.208
    num_examples: 1564
  download_size: 95517369
  dataset_size: 92858707.392
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: validation
    path: data/validation-*
task_categories:
- image-classification
language:
- en
pretty_name: Chars74k
size_categories:
- 1K<n<10K
---


## Chars74k

The "Good" subset of the "English" portion of the Chars74k dataset, split into training and validation sets.

The validation set was created to match the label distribution of the training set.
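The split described above can be sketched as a simple stratified split: group example indices by class, then move the same fraction of each class into the validation set. This is an illustrative pure-Python sketch, not the exact procedure used to build this dataset; the `val_fraction` value is an assumption chosen to roughly match the 1,564 / 7,700 split reported below.

```python
import random
from collections import defaultdict

def stratified_split(labels, val_fraction=0.2, seed=0):
    """Split example indices so each class appears in the validation
    set in (roughly) the same proportion as in the full dataset."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for idx, label in enumerate(labels):
        by_class[label].append(idx)
    train_idx, val_idx = [], []
    for idxs in by_class.values():
        rng.shuffle(idxs)
        cut = round(len(idxs) * val_fraction)
        val_idx.extend(idxs[:cut])
        train_idx.extend(idxs[cut:])
    return train_idx, val_idx
```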

62 classes (0-9, A-Z, a-z)
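The class names in the metadata are `Sample001` through `Sample062`. Assuming the conventional Chars74k ordering (digits first, then uppercase, then lowercase — consistent with the "0-9, A-Z, a-z" listing above), they can be mapped back to characters like this; the helper name is ours, not part of the dataset:

```python
import string

# Conventional Chars74k class order (an assumption; verify against the data):
# Sample001-010 -> digits 0-9, Sample011-036 -> A-Z, Sample037-062 -> a-z.
CHARS = string.digits + string.ascii_uppercase + string.ascii_lowercase

def class_name_to_char(name: str) -> str:
    """Convert a class label like 'Sample011' to its character ('A')."""
    index = int(name.removeprefix("Sample")) - 1
    return CHARS[index]
```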

Dataset page: https://teodecampos.github.io/chars74k/

Paper describing the dataset:   
https://www.semanticscholar.org/paper/Character-Recognition-in-Natural-Images-Campos-Babu/dbbd5fdc09349bbfdee7aa7365a9d37716852b32

Five images were removed due to poor quality.

Label distribution:

![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e26d05fb4a692673b3569a/ZvllFAQgS09s8pSrIvf4N.png)


![image/png](https://cdn-uploads.huggingface.co/production/uploads/62e26d05fb4a692673b3569a/TtC_gAK12lUMsNWdHRel7.png)