parquet-converter committed
Commit dc1a011 · Parent: 04f959c

Update parquet files
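This commit replaces the zipped COCO exports and the Python loading script (both deleted below) with pre-built parquet shards under `full/` and `mini/`. A minimal sketch of listing the new layout, assuming the repository id stays `keremberke/pcb-defect-segmentation` and using `huggingface_hub` (not part of this repository):

```python
from huggingface_hub import list_repo_files

# Assumption: repo id as referenced in the deleted dataset card below.
files = list_repo_files("keremberke/pcb-defect-segmentation", repo_type="dataset")
print([f for f in files if f.endswith(".parquet")])
# expected: full/ and mini/ shards for the train, validation, and test splits
```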
README.dataset.txt DELETED
@@ -1,6 +0,0 @@
- # Defects > Set_4
- https://universe.roboflow.com/diplom-qz7q6/defects-2q87r
-
- Provided by a Roboflow user
- License: CC BY 4.0
-
README.md DELETED
@@ -1,92 +0,0 @@
- ---
- task_categories:
- - image-segmentation
- tags:
- - roboflow
- - roboflow2huggingface
-
- ---
-
- <div align="center">
-   <img width="640" alt="keremberke/pcb-defect-segmentation" src="https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/thumbnail.jpg">
- </div>
-
- ### Dataset Labels
-
- ```
- ['dry_joint', 'incorrect_installation', 'pcb_damage', 'short_circuit']
- ```
-
-
- ### Number of Images
-
- ```json
- {'valid': 25, 'train': 128, 'test': 36}
- ```
-
-
- ### How to Use
-
- - Install [datasets](https://pypi.org/project/datasets/):
-
- ```bash
- pip install datasets
- ```
-
- - Load the dataset:
-
- ```python
- from datasets import load_dataset
-
- ds = load_dataset("keremberke/pcb-defect-segmentation", name="full")
- example = ds['train'][0]
- ```
-
- ### Roboflow Dataset Page
- [https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8](https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8?ref=roboflow2huggingface)
-
- ### Citation
-
- ```
- @misc{ defects-2q87r_dataset,
-     title = { Defects Dataset },
-     type = { Open Source Dataset },
-     author = { Diplom },
-     howpublished = { \url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } },
-     url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r },
-     journal = { Roboflow Universe },
-     publisher = { Roboflow },
-     year = { 2023 },
-     month = { jan },
-     note = { visited on 2023-01-27 },
- }
- ```
-
- ### License
- CC BY 4.0
-
- ### Dataset Summary
- This dataset was exported via roboflow.com on January 27, 2023 at 1:45 PM GMT
-
- Roboflow is an end-to-end computer vision platform that helps you
- * collaborate with your team on computer vision projects
- * collect & organize images
- * understand and search unstructured image data
- * annotate, and create datasets
- * export, train, and deploy computer vision models
- * use active learning to improve your dataset over time
-
- For state-of-the-art Computer Vision training notebooks you can use with this dataset,
- visit https://github.com/roboflow/notebooks
-
- To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
-
- The dataset includes 189 images.
- Defects are annotated in COCO format.
-
-
- The following pre-processing was applied to each image:
-
- No image augmentation techniques were applied.
-
-
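The deleted card keeps its `load_dataset` snippet and label list. As a hedged follow-up sketch, assuming the parquet-backed dataset preserves the schema declared in the (also deleted) `pcb-defect-segmentation.py` (image_id, image, width, height, and an `objects` sequence with bbox, segmentation, and category), the annotations can be inspected like this:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/pcb-defect-segmentation", name="full")
example = ds["train"][0]

# "objects" is a Sequence of dict features, so it comes back as parallel lists.
objects = example["objects"]
label_names = ds["train"].features["objects"].feature["category"].names
for bbox, category in zip(objects["bbox"], objects["category"]):
    print(label_names[category], bbox)
```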
README.roboflow.txt DELETED
@@ -1,27 +0,0 @@
-
- Defects - v8 Set_4
- ==============================
-
- This dataset was exported via roboflow.com on January 27, 2023 at 1:45 PM GMT
-
- Roboflow is an end-to-end computer vision platform that helps you
- * collaborate with your team on computer vision projects
- * collect & organize images
- * understand and search unstructured image data
- * annotate, and create datasets
- * export, train, and deploy computer vision models
- * use active learning to improve your dataset over time
-
- For state-of-the-art Computer Vision training notebooks you can use with this dataset,
- visit https://github.com/roboflow/notebooks
-
- To find over 100k other datasets and pre-trained models, visit https://universe.roboflow.com
-
- The dataset includes 189 images.
- Defects are annotated in COCO format.
-
- The following pre-processing was applied to each image:
-
- No image augmentation techniques were applied.
-
-
data/train.zip → full/pcb-defect-segmentation-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:8408b23e4f5a3b07814680022795ab0c2c87e11f271e7359425ab705b5d0e66f
- size 6411968
+ oid sha256:d0cbad7ede84bd4b05e5217aabfe2285fe52b515edad54ac4eade3c5d584dfde
+ size 1731899
data/valid.zip → full/pcb-defect-segmentation-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:abd5cf5ff9e276f2362f34418b82c7c251dd655af70a9e1a6d8b2f5ab0c8461d
- size 1278204
+ oid sha256:4223debff24b7efdf9f5735d2859a774000668364cb93ffaabd1f24891038be3
+ size 6441123
data/test.zip → full/pcb-defect-segmentation-validation.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:fcbbfde72b63afe9883ea2240b23cd2a3eede24dab3ea150a28067c0c8bf653f
- size 1719625
+ oid sha256:14f7f82507e4c7e97f8894d76a534399f925889a7a2e0774d4a43129823152f1
+ size 1287330
thumbnail.jpg → mini/pcb-defect-segmentation-test.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:2f3e5d2232de8ca2fa4d1933e89c92e1389aaece52e84bf8d2065434ddb75b6a
- size 158753
+ oid sha256:67c0960cf9638ec7ab6b471855fdb4566cb6bb3cf463eb27ea2a6200497f02db
+ size 160934
data/valid-mini.zip → mini/pcb-defect-segmentation-train.parquet RENAMED
@@ -1,3 +1,3 @@
  version https://git-lfs.github.com/spec/v1
- oid sha256:cf8c4a5792c92130109ee55eef43d2fcda6f3c66a990ef285ae1b54aae764c47
- size 154907
+ oid sha256:67c0960cf9638ec7ab6b471855fdb4566cb6bb3cf463eb27ea2a6200497f02db
+ size 160934
mini/pcb-defect-segmentation-validation.parquet ADDED
@@ -0,0 +1,3 @@
+ version https://git-lfs.github.com/spec/v1
+ oid sha256:67c0960cf9638ec7ab6b471855fdb4566cb6bb3cf463eb27ea2a6200497f02db
+ size 160934
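The renamed and added entries above are git-lfs pointer files (`version` / `oid sha256` / `size`), not the parquet data itself. A hedged sketch of pulling one shard and inspecting it directly, assuming the paths introduced in this commit and using `huggingface_hub` and `pandas` (neither is part of this repository):

```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Assumption: shard path as shown in the rename above.
path = hf_hub_download(
    repo_id="keremberke/pcb-defect-segmentation",
    filename="full/pcb-defect-segmentation-train.parquet",
    repo_type="dataset",
)
df = pd.read_parquet(path)
print(len(df), list(df.columns))  # row count should match the train split size
```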
pcb-defect-segmentation.py DELETED
@@ -1,154 +0,0 @@
- import collections
- import json
- import os
-
- import datasets
-
-
- _HOMEPAGE = "https://universe.roboflow.com/diplom-qz7q6/defects-2q87r/dataset/8"
- _LICENSE = "CC BY 4.0"
- _CITATION = """\
- @misc{ defects-2q87r_dataset,
-     title = { Defects Dataset },
-     type = { Open Source Dataset },
-     author = { Diplom },
-     howpublished = { \\url{ https://universe.roboflow.com/diplom-qz7q6/defects-2q87r } },
-     url = { https://universe.roboflow.com/diplom-qz7q6/defects-2q87r },
-     journal = { Roboflow Universe },
-     publisher = { Roboflow },
-     year = { 2023 },
-     month = { jan },
-     note = { visited on 2023-01-27 },
- }
- """
- _CATEGORIES = ['dry_joint', 'incorrect_installation', 'pcb_damage', 'short_circuit']
- _ANNOTATION_FILENAME = "_annotations.coco.json"
-
-
- class PCBDEFECTSEGMENTATIONConfig(datasets.BuilderConfig):
-     """Builder Config for pcb-defect-segmentation"""
-
-     def __init__(self, data_urls, **kwargs):
-         """
-         BuilderConfig for pcb-defect-segmentation.
-
-         Args:
-             data_urls: `dict`, name to url to download the zip file from.
-             **kwargs: keyword arguments forwarded to super.
-         """
-         super(PCBDEFECTSEGMENTATIONConfig, self).__init__(version=datasets.Version("1.0.0"), **kwargs)
-         self.data_urls = data_urls
-
-
- class PCBDEFECTSEGMENTATION(datasets.GeneratorBasedBuilder):
-     """pcb-defect-segmentation instance segmentation dataset"""
-
-     VERSION = datasets.Version("1.0.0")
-     BUILDER_CONFIGS = [
-         PCBDEFECTSEGMENTATIONConfig(
-             name="full",
-             description="Full version of pcb-defect-segmentation dataset.",
-             data_urls={
-                 "train": "https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/data/train.zip",
-                 "validation": "https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/data/valid.zip",
-                 "test": "https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/data/test.zip",
-             },
-         ),
-         PCBDEFECTSEGMENTATIONConfig(
-             name="mini",
-             description="Mini version of pcb-defect-segmentation dataset.",
-             data_urls={
-                 "train": "https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/data/valid-mini.zip",
-                 "validation": "https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/data/valid-mini.zip",
-                 "test": "https://huggingface.co/datasets/keremberke/pcb-defect-segmentation/resolve/main/data/valid-mini.zip",
-             },
-         )
-     ]
-
-     def _info(self):
-         features = datasets.Features(
-             {
-                 "image_id": datasets.Value("int64"),
-                 "image": datasets.Image(),
-                 "width": datasets.Value("int32"),
-                 "height": datasets.Value("int32"),
-                 "objects": datasets.Sequence(
-                     {
-                         "id": datasets.Value("int64"),
-                         "area": datasets.Value("int64"),
-                         "bbox": datasets.Sequence(datasets.Value("float32"), length=4),
-                         "segmentation": datasets.Sequence(datasets.Sequence(datasets.Value("float32"))),
-                         "category": datasets.ClassLabel(names=_CATEGORIES),
-                     }
-                 ),
-             }
-         )
-         return datasets.DatasetInfo(
-             features=features,
-             homepage=_HOMEPAGE,
-             citation=_CITATION,
-             license=_LICENSE,
-         )
-
-     def _split_generators(self, dl_manager):
-         data_files = dl_manager.download_and_extract(self.config.data_urls)
-         return [
-             datasets.SplitGenerator(
-                 name=datasets.Split.TRAIN,
-                 gen_kwargs={
-                     "folder_dir": data_files["train"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.VALIDATION,
-                 gen_kwargs={
-                     "folder_dir": data_files["validation"],
-                 },
-             ),
-             datasets.SplitGenerator(
-                 name=datasets.Split.TEST,
-                 gen_kwargs={
-                     "folder_dir": data_files["test"],
-                 },
-             ),
-         ]
-
-     def _generate_examples(self, folder_dir):
-         def process_annot(annot, category_id_to_category):
-             return {
-                 "id": annot["id"],
-                 "area": annot["area"],
-                 "bbox": annot["bbox"],
-                 "segmentation": annot["segmentation"],
-                 "category": category_id_to_category[annot["category_id"]],
-             }
-
-         image_id_to_image = {}
-         idx = 0
-
-         annotation_filepath = os.path.join(folder_dir, _ANNOTATION_FILENAME)
-         with open(annotation_filepath, "r") as f:
-             annotations = json.load(f)
-         category_id_to_category = {category["id"]: category["name"] for category in annotations["categories"]}
-         image_id_to_annotations = collections.defaultdict(list)
-         for annot in annotations["annotations"]:
-             image_id_to_annotations[annot["image_id"]].append(annot)
-         filename_to_image = {image["file_name"]: image for image in annotations["images"]}
-
-         for filename in os.listdir(folder_dir):
-             filepath = os.path.join(folder_dir, filename)
-             if filename in filename_to_image:
-                 image = filename_to_image[filename]
-                 objects = [
-                     process_annot(annot, category_id_to_category) for annot in image_id_to_annotations[image["id"]]
-                 ]
-                 with open(filepath, "rb") as f:
-                     image_bytes = f.read()
-                 yield idx, {
-                     "image_id": image["id"],
-                     "image": {"path": filepath, "bytes": image_bytes},
-                     "width": image["width"],
-                     "height": image["height"],
-                     "objects": objects,
-                 }
-                 idx += 1
split_name_to_num_samples.json DELETED
@@ -1 +0,0 @@
- {"valid": 25, "train": 128, "test": 36}
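The deleted JSON recorded the split sizes (128 train / 25 valid / 36 test). A quick, hedged check that the parquet conversion preserves those counts, assuming the `full` config now reads the shards added in this commit:

```python
from datasets import load_dataset

ds = load_dataset("keremberke/pcb-defect-segmentation", name="full")
# Expected, per the deleted split_name_to_num_samples.json:
# {"train": 128, "validation": 25, "test": 36}
print({split: ds[split].num_rows for split in ds})
```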