---
language:
- fr
size_categories:
- 100K<n<1M
task_categories:
- token-classification
pretty_name: wikiner_fr
dataset_info:
  features:
  - name: id
    dtype: int64
  - name: tokens
    sequence: string
  - name: ner_tags
    sequence:
      class_label:
        names:
          '0': O
          '1': LOC
          '2': PER
          '3': MISC
          '4': ORG
  splits:
  - name: train
    num_bytes: 54139057
    num_examples: 120060
  - name: test
    num_bytes: 5952227
    num_examples: 13393
  download_size: 15572314
  dataset_size: 60091284
configs:
- config_name: default
  data_files:
  - split: train
    path: data/train-*
  - split: test
    path: data/test-*
---

# Dataset Card for "wikiner_fr_mixed_caps"

This is an updated version of the dataset [Jean-Baptiste/wikiner_fr](https://huggingface.co/datasets/Jean-Baptiste/wikiner_fr) with:
 - removal of duplicated examples and leakage
 - random de-capitalization of words (20%)

The code used to make these changes is in the script `update_dataset.py` in this repository.
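The random de-capitalization step can be sketched as follows. This is a minimal illustration, not the actual contents of `update_dataset.py`: the function name, the per-token application, and the seeding scheme are assumptions; only the 20% rate comes from the card.

```python
import random


def decapitalize_tokens(tokens, p=0.2, seed=None):
    """Lowercase the first character of roughly a fraction `p` of
    capitalized tokens, leaving all other tokens unchanged.

    Illustrative sketch only -- the real script may sample differently.
    """
    rng = random.Random(seed)
    out = []
    for tok in tokens:
        if tok[:1].isupper() and rng.random() < p:
            out.append(tok[0].lower() + tok[1:])
        else:
            out.append(tok)
    return out
```

Applied over every example in the corpus, this forces a NER model to rely less on capitalization as a cue for entity boundaries.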

## Dataset Description (reproduced from the original repo)

- **Homepage:** https://metatext.io/datasets/wikiner
- **Repository:** 
- **Paper:** https://www.sciencedirect.com/science/article/pii/S0004370212000276?via%3Dihub
- **Leaderboard:**
- **Point of Contact:**
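The `ner_tags` feature stores integer class labels. A minimal sketch for decoding them back to tag names, using the `class_label` mapping declared in the YAML header above (the helper function itself is illustrative, not part of the dataset):

```python
# Label ids exactly as declared in the dataset card's class_label mapping.
NER_LABELS = ["O", "LOC", "PER", "MISC", "ORG"]


def decode_tags(tag_ids):
    """Map a sequence of integer ner_tags to their string names."""
    return [NER_LABELS[i] for i in tag_ids]
```

With `datasets`, the same mapping is available programmatically via `dataset.features["ner_tags"].feature.int2str`.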