AlekseyScorpi committed
Commit 6be4adf · Parent: b882bd2

Update README.md

Files changed (1): README.md (+10 −1)

README.md CHANGED
@@ -29,11 +29,20 @@ dataset_info:
     num_bytes: 1893804579.79
     num_examples: 1987
   - name: test
-    num_bytes: 374568135.0
+    num_bytes: 374568135
     num_examples: 339
   download_size: 2423302965
   dataset_size: 2268372714.79
+task_categories:
+- text-classification
+tags:
+- code
+size_categories:
+- 1K<n<10K
 ---
 # Dataset Card for "docs_on_several_languages"
+This dataset is a collection of document images in different languages.
+The set includes the following languages: Azerbaijani, Belarusian, Chinese, English, Estonian, Finnish, Georgian, Japanese, Korean, Kazakh, Latvian, Lithuanian, Mongolian, Norwegian, Polish, Russian, Ukrainian.
+Each language has a corresponding class label, and at least 100 images in the dataset are allocated per class. The dataset was originally used for classifying the language of a document from its image, but it may also be useful for other machine learning tasks.
 
 [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
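The added card text says each language gets its own class label for the language-classification task. A minimal sketch of what that label scheme could look like, assuming (this is an assumption, not stated in the card) that the 17 language names are assigned integer ids in alphabetical order, as is common for folder-based image datasets:

```python
# Hypothetical label mapping for the 17 languages listed in the card.
# Assumption: ids follow alphabetical order of language names; the actual
# dataset's ClassLabel ordering may differ.
LANGUAGES = [
    "Azerbaijani", "Belarusian", "Chinese", "English", "Estonian",
    "Finnish", "Georgian", "Japanese", "Korean", "Kazakh", "Latvian",
    "Lithuanian", "Mongolian", "Norwegian", "Polish", "Russian", "Ukrainian",
]

label2id = {name: i for i, name in enumerate(sorted(LANGUAGES))}
id2label = {i: name for name, i in label2id.items()}

print(label2id["English"])  # 3 under alphabetical ordering
```

With at least 100 images per class and 1K–10K examples overall, such a mapping is all a text-classification head needs to train against the image/label pairs.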